Sep 13 00:16:03.968895 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:16:03.968928 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:16:03.968944 kernel: BIOS-provided physical RAM map:
Sep 13 00:16:03.968953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:16:03.968962 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:16:03.968971 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:16:03.968981 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:16:03.968990 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:16:03.968999 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 13 00:16:03.969008 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 13 00:16:03.969020 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 13 00:16:03.969029 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 13 00:16:03.969042 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 13 00:16:03.969051 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 13 00:16:03.969066 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 13 00:16:03.969076 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:16:03.969089 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 13 00:16:03.969099 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 13 00:16:03.969108 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:16:03.969117 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:16:03.969127 kernel: NX (Execute Disable) protection: active
Sep 13 00:16:03.969136 kernel: APIC: Static calls initialized
Sep 13 00:16:03.969145 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:16:03.969183 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 13 00:16:03.969193 kernel: SMBIOS 2.8 present.
Sep 13 00:16:03.969203 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 13 00:16:03.969212 kernel: Hypervisor detected: KVM
Sep 13 00:16:03.969226 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:16:03.969236 kernel: kvm-clock: using sched offset of 5785049548 cycles
Sep 13 00:16:03.969246 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:16:03.969256 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:16:03.969266 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:16:03.969277 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:16:03.969286 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 13 00:16:03.969296 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:16:03.969306 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:16:03.969320 kernel: Using GB pages for direct mapping
Sep 13 00:16:03.969329 kernel: Secure boot disabled
Sep 13 00:16:03.969339 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:16:03.969349 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 00:16:03.969365 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:16:03.969375 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:16:03.969385 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:16:03.969399 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 00:16:03.969409 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:16:03.969425 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:16:03.969436 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:16:03.969446 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:16:03.969456 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:16:03.969466 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 00:16:03.969481 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 00:16:03.969491 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 00:16:03.969501 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 00:16:03.969511 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 00:16:03.969521 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 00:16:03.969531 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 00:16:03.969541 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 00:16:03.969552 kernel: No NUMA configuration found
Sep 13 00:16:03.969565 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 13 00:16:03.969579 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 13 00:16:03.969590 kernel: Zone ranges:
Sep 13 00:16:03.969600 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:16:03.969610 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 13 00:16:03.969633 kernel: Normal empty
Sep 13 00:16:03.969644 kernel: Movable zone start for each node
Sep 13 00:16:03.969654 kernel: Early memory node ranges
Sep 13 00:16:03.969664 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:16:03.969674 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 00:16:03.969689 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 00:16:03.969699 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 13 00:16:03.969709 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 13 00:16:03.969719 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 13 00:16:03.969733 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 13 00:16:03.969743 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:16:03.969753 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:16:03.969764 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 00:16:03.969774 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:16:03.969784 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 13 00:16:03.969798 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:16:03.969808 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 13 00:16:03.969819 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:16:03.969829 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:16:03.969839 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:16:03.969849 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:16:03.969860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:16:03.969869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:16:03.969880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:16:03.969894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:16:03.969904 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:16:03.969914 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:16:03.969925 kernel: TSC deadline timer available
Sep 13 00:16:03.969935 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:16:03.969945 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:16:03.969955 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:16:03.969965 kernel: kvm-guest: setup PV sched yield
Sep 13 00:16:03.969976 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:16:03.969990 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:16:03.970001 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:16:03.970011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:16:03.970021 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 13 00:16:03.970032 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 13 00:16:03.970042 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:16:03.970067 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:16:03.970086 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:16:03.970116 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:16:03.970197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:16:03.970228 kernel: random: crng init done
Sep 13 00:16:03.970239 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:16:03.970250 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:16:03.970260 kernel: Fallback order for Node 0: 0
Sep 13 00:16:03.970270 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 13 00:16:03.970281 kernel: Policy zone: DMA32
Sep 13 00:16:03.970291 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:16:03.970308 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 166140K reserved, 0K cma-reserved)
Sep 13 00:16:03.970318 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:16:03.970329 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:16:03.970339 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:16:03.970349 kernel: Dynamic Preempt: voluntary
Sep 13 00:16:03.970371 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:16:03.970390 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:16:03.970402 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:16:03.970413 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:16:03.970424 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:16:03.970435 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:16:03.970446 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:16:03.970461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:16:03.970472 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:16:03.970488 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:16:03.970499 kernel: Console: colour dummy device 80x25
Sep 13 00:16:03.970509 kernel: printk: console [ttyS0] enabled
Sep 13 00:16:03.970524 kernel: ACPI: Core revision 20230628
Sep 13 00:16:03.970535 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:16:03.970546 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:16:03.970556 kernel: x2apic enabled
Sep 13 00:16:03.970567 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:16:03.970578 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:16:03.970589 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:16:03.970600 kernel: kvm-guest: setup PV IPIs
Sep 13 00:16:03.970611 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:16:03.970639 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:16:03.970650 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:16:03.970660 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:16:03.970671 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:16:03.970682 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:16:03.970694 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:16:03.970705 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:16:03.970715 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:16:03.970731 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:16:03.970742 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:16:03.970753 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:16:03.970764 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:16:03.970775 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:16:03.970790 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:16:03.970802 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:16:03.970813 kernel: active return thunk: srso_return_thunk
Sep 13 00:16:03.970824 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:16:03.970839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:16:03.970850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:16:03.970861 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:16:03.970872 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:16:03.970883 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:16:03.970894 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:16:03.970905 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:16:03.970915 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:16:03.970926 kernel: landlock: Up and running.
Sep 13 00:16:03.970941 kernel: SELinux: Initializing.
Sep 13 00:16:03.970952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:16:03.970963 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:16:03.970974 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:16:03.970985 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:16:03.970997 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:16:03.971008 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:16:03.971018 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:16:03.971033 kernel: ... version:                0
Sep 13 00:16:03.971044 kernel: ... bit width:              48
Sep 13 00:16:03.971055 kernel: ... generic registers:      6
Sep 13 00:16:03.971066 kernel: ... value mask:             0000ffffffffffff
Sep 13 00:16:03.971076 kernel: ... max period:             00007fffffffffff
Sep 13 00:16:03.971087 kernel: ... fixed-purpose events:   0
Sep 13 00:16:03.971098 kernel: ... event mask:             000000000000003f
Sep 13 00:16:03.971109 kernel: signal: max sigframe size: 1776
Sep 13 00:16:03.971120 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:16:03.971131 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:16:03.971147 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:16:03.971173 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:16:03.971184 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:16:03.971194 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:16:03.971205 kernel: smpboot: Max logical packages: 1
Sep 13 00:16:03.971216 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:16:03.971227 kernel: devtmpfs: initialized
Sep 13 00:16:03.971238 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:16:03.971249 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 00:16:03.971265 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 00:16:03.971277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 13 00:16:03.971288 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 00:16:03.971298 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 00:16:03.971310 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:16:03.971320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:16:03.971331 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:16:03.971342 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:16:03.971353 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:16:03.971368 kernel: audit: type=2000 audit(1757722563.392:1): state=initialized audit_enabled=0 res=1
Sep 13 00:16:03.971379 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:16:03.971389 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:16:03.971400 kernel: cpuidle: using governor menu
Sep 13 00:16:03.971411 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:16:03.971422 kernel: dca service started, version 1.12.1
Sep 13 00:16:03.971438 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:16:03.971465 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 00:16:03.971489 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:16:03.971527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:16:03.971554 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:16:03.971580 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:16:03.971591 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:16:03.971602 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:16:03.971612 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:16:03.971638 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:16:03.971649 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:16:03.971660 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:16:03.971675 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:16:03.971685 kernel: ACPI: Interpreter enabled
Sep 13 00:16:03.971696 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:16:03.971707 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:16:03.971718 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:16:03.971729 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:16:03.971739 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:16:03.971750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:16:03.972103 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:16:03.972339 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:16:03.972544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:16:03.972561 kernel: PCI host bridge to bus 0000:00
Sep 13 00:16:03.972806 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:16:03.972970 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:16:03.973128 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:16:03.973339 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:16:03.973526 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:16:03.973694 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 13 00:16:03.973852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:16:03.974086 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:16:03.974294 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:16:03.974489 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 13 00:16:03.974671 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 13 00:16:03.974847 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:16:03.975020 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 13 00:16:03.975227 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:16:03.975409 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:16:03.975586 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 13 00:16:03.975814 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 13 00:16:03.976039 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 13 00:16:03.976359 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:16:03.976541 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 13 00:16:03.976727 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 13 00:16:03.976901 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 13 00:16:03.977175 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:16:03.977365 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 13 00:16:03.977536 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 13 00:16:03.977725 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 13 00:16:03.977909 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 13 00:16:03.978139 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:16:03.978355 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:16:03.978565 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:16:03.978768 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 13 00:16:03.978942 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 13 00:16:03.979125 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:16:03.979417 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 13 00:16:03.979436 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:16:03.979448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:16:03.979459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:16:03.979477 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:16:03.979488 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:16:03.979500 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:16:03.979511 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:16:03.979523 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:16:03.979534 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:16:03.979545 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:16:03.979556 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:16:03.979567 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:16:03.979583 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:16:03.979594 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:16:03.979605 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:16:03.979629 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:16:03.979641 kernel: iommu: Default domain type: Translated
Sep 13 00:16:03.979652 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:16:03.979664 kernel: efivars: Registered efivars operations
Sep 13 00:16:03.979675 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:16:03.979686 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:16:03.979702 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 00:16:03.979714 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 13 00:16:03.979725 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 13 00:16:03.979736 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 13 00:16:03.979915 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:16:03.980093 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:16:03.980286 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:16:03.980303 kernel: vgaarb: loaded
Sep 13 00:16:03.980315 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:16:03.980333 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:16:03.980345 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:16:03.980356 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:16:03.980367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:16:03.980379 kernel: pnp: PnP ACPI init
Sep 13 00:16:03.980630 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:16:03.980651 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:16:03.980662 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:16:03.980679 kernel: NET: Registered PF_INET protocol family
Sep 13 00:16:03.980690 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:16:03.980702 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:16:03.980714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:16:03.980725 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:16:03.980736 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:16:03.980748 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:16:03.980759 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:16:03.980770 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:16:03.980786 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:16:03.980797 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:16:03.980997 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 13 00:16:03.981219 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 13 00:16:03.981384 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:16:03.981541 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:16:03.981711 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:16:03.981868 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:16:03.982031 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:16:03.982208 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 13 00:16:03.982225 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:16:03.982237 kernel: Initialise system trusted keyrings
Sep 13 00:16:03.982248 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:16:03.982259 kernel: Key type asymmetric registered
Sep 13 00:16:03.982270 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:16:03.982281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:16:03.982298 kernel: io scheduler mq-deadline registered
Sep 13 00:16:03.982309 kernel: io scheduler kyber registered
Sep 13 00:16:03.982320 kernel: io scheduler bfq registered
Sep 13 00:16:03.982332 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:16:03.982344 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:16:03.982355 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:16:03.982366 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:16:03.982377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:16:03.982389 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:16:03.982405 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:16:03.982416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:16:03.982427 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:16:03.982637 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:16:03.982655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:16:03.982815 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:16:03.982974 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:16:03 UTC (1757722563)
Sep 13 00:16:03.983132 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:16:03.983184 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:16:03.983197 kernel: efifb: probing for efifb
Sep 13 00:16:03.983209 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 13 00:16:03.983219 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 13 00:16:03.983230 kernel: efifb: scrolling: redraw
Sep 13 00:16:03.983241 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 13 00:16:03.983251 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:16:03.983287 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:16:03.983302 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:16:03.983316 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:16:03.983328 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:16:03.983339 kernel: Segment Routing with IPv6
Sep 13 00:16:03.983350 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:16:03.983362 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:16:03.983373 kernel: Key type dns_resolver registered
Sep 13 00:16:03.983384 kernel: IPI shorthand broadcast: enabled
Sep 13 00:16:03.983395 kernel: sched_clock: Marking stable (982004130, 134246101)->(1284769221, -168518990)
Sep 13 00:16:03.983407 kernel: registered taskstats version 1
Sep 13 00:16:03.983422 kernel: Loading compiled-in X.509 certificates
Sep 13 00:16:03.983433 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:16:03.983445 kernel: Key type .fscrypt registered
Sep 13 00:16:03.983455 kernel: Key type fscrypt-provisioning registered
Sep 13 00:16:03.983466 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:16:03.983478 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:16:03.983489 kernel: ima: No architecture policies found
Sep 13 00:16:03.983500 kernel: clk: Disabling unused clocks
Sep 13 00:16:03.983512 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:16:03.983527 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:16:03.983539 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:16:03.983551 kernel: Run /init as init process
Sep 13 00:16:03.983562 kernel: with arguments:
Sep 13 00:16:03.983573 kernel: /init
Sep 13 00:16:03.983584 kernel: with environment:
Sep 13 00:16:03.983594 kernel: HOME=/
Sep 13 00:16:03.983605 kernel: TERM=linux
Sep 13 00:16:03.983628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:16:03.983651 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:16:03.983666 systemd[1]: Detected virtualization kvm.
Sep 13 00:16:03.983679 systemd[1]: Detected architecture x86-64.
Sep 13 00:16:03.983691 systemd[1]: Running in initrd.
Sep 13 00:16:03.983709 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:16:03.983721 systemd[1]: Hostname set to .
Sep 13 00:16:03.983733 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:16:03.983745 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:16:03.983757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:16:03.983770 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:16:03.983783 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:16:03.983796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:16:03.983812 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:16:03.983825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:16:03.983840 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:16:03.983853 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:16:03.983865 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:16:03.983877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:16:03.983889 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:16:03.983907 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:16:03.983922 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:16:03.983934 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:16:03.983946 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:16:03.983959 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:16:03.983971 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:16:03.983983 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:16:03.983995 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:16:03.984011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:16:03.984023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:16:03.984036 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:16:03.984048 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:16:03.984060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:16:03.984073 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:16:03.984085 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:16:03.984097 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:16:03.984110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:16:03.984126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:03.984138 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:16:03.984206 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:16:03.984248 systemd-journald[192]: Collecting audit messages is disabled.
Sep 13 00:16:03.984283 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:16:03.984297 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:16:03.984310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:03.984323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:16:03.984339 systemd-journald[192]: Journal started
Sep 13 00:16:03.984363 systemd-journald[192]: Runtime Journal (/run/log/journal/1baa28fd8f424ebe8ec42685475a73a4) is 6.0M, max 48.3M, 42.2M free.
Sep 13 00:16:03.973242 systemd-modules-load[193]: Inserted module 'overlay'
Sep 13 00:16:04.004425 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:16:04.005181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:16:04.017340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:16:04.019455 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:16:04.022130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:16:04.024838 systemd-modules-load[193]: Inserted module 'br_netfilter'
Sep 13 00:16:04.025761 kernel: Bridge firewalling registered
Sep 13 00:16:04.027287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:16:04.030671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:16:04.033661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:04.034257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:16:04.038958 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:16:04.049997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:16:04.054042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:16:04.082215 dracut-cmdline[223]: dracut-dracut-053
Sep 13 00:16:04.083411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:16:04.088804 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:16:04.133392 systemd-resolved[232]: Positive Trust Anchors:
Sep 13 00:16:04.133420 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:16:04.133467 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:16:04.137369 systemd-resolved[232]: Defaulting to hostname 'linux'.
Sep 13 00:16:04.139101 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:16:04.143921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:16:04.219206 kernel: SCSI subsystem initialized
Sep 13 00:16:04.230236 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:16:04.244235 kernel: iscsi: registered transport (tcp)
Sep 13 00:16:04.273217 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:16:04.273316 kernel: QLogic iSCSI HBA Driver
Sep 13 00:16:04.357892 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:16:04.369466 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:16:04.402073 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:16:04.402196 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:16:04.402211 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:16:04.450211 kernel: raid6: avx2x4 gen() 22576 MB/s
Sep 13 00:16:04.467244 kernel: raid6: avx2x2 gen() 25270 MB/s
Sep 13 00:16:04.484425 kernel: raid6: avx2x1 gen() 20576 MB/s
Sep 13 00:16:04.484527 kernel: raid6: using algorithm avx2x2 gen() 25270 MB/s
Sep 13 00:16:04.502337 kernel: raid6: .... xor() 17526 MB/s, rmw enabled
Sep 13 00:16:04.502446 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:16:04.531218 kernel: xor: automatically using best checksumming function avx
Sep 13 00:16:04.730219 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:16:04.751670 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:16:04.763009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:16:04.780244 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 13 00:16:04.787082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:16:04.804183 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:16:04.835468 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Sep 13 00:16:04.887305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:16:04.896774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:16:04.990730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:16:05.002422 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:16:05.019853 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:16:05.024502 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:16:05.027754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:16:05.030928 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:16:05.041246 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 00:16:05.041435 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:16:05.046178 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:16:05.049713 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:16:05.065398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:16:05.076398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:16:05.078057 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:16:05.078097 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:16:05.078690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:05.080829 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:16:05.085713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:16:05.090412 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:16:05.090441 kernel: GPT:9289727 != 19775487
Sep 13 00:16:05.090458 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:16:05.090474 kernel: GPT:9289727 != 19775487
Sep 13 00:16:05.090488 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:16:05.085986 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:05.095427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:05.092030 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:05.100216 kernel: libata version 3.00 loaded.
Sep 13 00:16:05.104893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:05.113099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:16:05.116874 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:16:05.117201 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:16:05.114953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:05.122397 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:16:05.123200 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:16:05.128059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:05.130830 kernel: scsi host0: ahci
Sep 13 00:16:05.133182 kernel: scsi host1: ahci
Sep 13 00:16:05.140240 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Sep 13 00:16:05.143176 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (457)
Sep 13 00:16:05.149174 kernel: scsi host2: ahci
Sep 13 00:16:05.155177 kernel: scsi host3: ahci
Sep 13 00:16:05.155472 kernel: scsi host4: ahci
Sep 13 00:16:05.157647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:16:05.164225 kernel: scsi host5: ahci
Sep 13 00:16:05.164455 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 13 00:16:05.164474 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 13 00:16:05.164495 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 13 00:16:05.164509 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 13 00:16:05.164523 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 13 00:16:05.164433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:05.169358 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 13 00:16:05.179540 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:16:05.194502 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:16:05.196407 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:16:05.206261 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:16:05.218505 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:16:05.221607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:16:05.228296 disk-uuid[568]: Primary Header is updated.
Sep 13 00:16:05.228296 disk-uuid[568]: Secondary Entries is updated.
Sep 13 00:16:05.228296 disk-uuid[568]: Secondary Header is updated.
Sep 13 00:16:05.233182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:05.240197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:05.246888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:05.473193 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:16:05.473284 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:16:05.474187 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:16:05.481181 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:16:05.481251 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:16:05.482451 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:16:05.482480 kernel: ata3.00: applying bridge limits
Sep 13 00:16:05.483509 kernel: ata3.00: configured for UDMA/100
Sep 13 00:16:05.484182 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:16:05.485190 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:16:05.531208 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:16:05.531673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:16:05.545184 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:16:06.266173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:06.266884 disk-uuid[570]: The operation has completed successfully.
Sep 13 00:16:06.297260 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:16:06.297459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:16:06.333617 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:16:06.339652 sh[593]: Success
Sep 13 00:16:06.368231 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:16:06.411416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:16:06.431431 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:16:06.435137 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:16:06.474295 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:16:06.474391 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:16:06.474404 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:16:06.475388 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:16:06.477185 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:16:06.483796 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:16:06.485355 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:16:06.500442 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:16:06.514261 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:16:06.554192 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:16:06.554290 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:16:06.554309 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:16:06.559281 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:16:06.570627 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:16:06.588465 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:16:06.660337 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:16:06.674364 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:16:06.704399 systemd-networkd[771]: lo: Link UP
Sep 13 00:16:06.704411 systemd-networkd[771]: lo: Gained carrier
Sep 13 00:16:06.706389 systemd-networkd[771]: Enumeration completed
Sep 13 00:16:06.706555 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:16:06.706898 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:06.706904 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:16:06.708519 systemd-networkd[771]: eth0: Link UP
Sep 13 00:16:06.708524 systemd-networkd[771]: eth0: Gained carrier
Sep 13 00:16:06.708532 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:06.709424 systemd[1]: Reached target network.target - Network.
Sep 13 00:16:06.729257 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:16:06.751222 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:16:06.757572 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:16:06.832566 ignition[777]: Ignition 2.19.0
Sep 13 00:16:06.832588 ignition[777]: Stage: fetch-offline
Sep 13 00:16:06.832641 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:06.832654 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:06.832791 ignition[777]: parsed url from cmdline: ""
Sep 13 00:16:06.832796 ignition[777]: no config URL provided
Sep 13 00:16:06.832803 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:16:06.832815 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:16:06.832853 ignition[777]: op(1): [started] loading QEMU firmware config module
Sep 13 00:16:06.832860 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:16:06.845206 ignition[777]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:16:06.845243 ignition[777]: QEMU firmware config was not found. Ignoring...
Sep 13 00:16:06.887472 ignition[777]: parsing config with SHA512: 03978bee71158a3b6c4b7d82b75734a10c9cf01373218a9a0ba94c886e341fc5a570f66ebcb0770f9e124afc5201a1a407d46bb286c50fafe30b935a35933540
Sep 13 00:16:06.894250 unknown[777]: fetched base config from "system"
Sep 13 00:16:06.894276 unknown[777]: fetched user config from "qemu"
Sep 13 00:16:06.895651 ignition[777]: fetch-offline: fetch-offline passed
Sep 13 00:16:06.895765 ignition[777]: Ignition finished successfully
Sep 13 00:16:06.902291 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:16:06.904959 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:16:06.918846 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:16:06.989470 ignition[786]: Ignition 2.19.0
Sep 13 00:16:06.989493 ignition[786]: Stage: kargs
Sep 13 00:16:06.989749 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:06.989766 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:06.990890 ignition[786]: kargs: kargs passed
Sep 13 00:16:06.990955 ignition[786]: Ignition finished successfully
Sep 13 00:16:06.998799 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:16:07.010491 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:16:07.050733 ignition[794]: Ignition 2.19.0
Sep 13 00:16:07.050758 ignition[794]: Stage: disks
Sep 13 00:16:07.051043 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:07.051059 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:07.052307 ignition[794]: disks: disks passed
Sep 13 00:16:07.052394 ignition[794]: Ignition finished successfully
Sep 13 00:16:07.058967 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:16:07.062357 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:16:07.062945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:16:07.063508 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:16:07.075496 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:16:07.080718 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:16:07.090594 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:16:07.111660 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:16:07.149660 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:16:07.162722 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:16:07.322243 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:16:07.322908 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:16:07.324340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:16:07.340688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:16:07.343456 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:16:07.344430 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:16:07.380322 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Sep 13 00:16:07.344496 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:16:07.385814 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:16:07.385850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:16:07.385865 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:16:07.344546 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:16:07.388240 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:16:07.390806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:16:07.416793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:16:07.435452 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:16:07.478328 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:16:07.500068 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:16:07.507189 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:16:07.514897 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:16:07.639571 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:16:07.656421 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:16:07.667084 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:16:07.674334 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:16:07.675394 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:16:07.721072 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:16:07.779097 ignition[930]: INFO : Ignition 2.19.0
Sep 13 00:16:07.779097 ignition[930]: INFO : Stage: mount
Sep 13 00:16:07.781467 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:07.781467 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:07.781467 ignition[930]: INFO : mount: mount passed
Sep 13 00:16:07.781467 ignition[930]: INFO : Ignition finished successfully
Sep 13 00:16:07.786535 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:16:07.796408 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:16:07.806001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:16:07.821326 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 13 00:16:07.821388 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:16:07.823011 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:16:07.823036 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:16:07.827191 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:16:07.829781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
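The four "cut: ... No such file or directory" lines are expected on a first boot: initrd-setup-root probes the account databases in the freshly created root before seeding them from the read-only /usr image. A hypothetical sketch of the probe pattern these messages suggest (the actual Flatcar script is not shown in this log):

    # field 1 of passwd(5) is the user name; list accounts already present
    cut -d: -f1 /sysroot/etc/passwd   # first boot: file absent, cut prints the error above
    # the setup then falls back to installing the stock /usr defaults, after
    # which the later ignition files stage can run "usermod --root /sysroot core"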
Sep 13 00:16:07.857892 ignition[957]: INFO : Ignition 2.19.0
Sep 13 00:16:07.857892 ignition[957]: INFO : Stage: files
Sep 13 00:16:07.859794 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:07.859794 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:07.859794 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:16:07.864211 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:16:07.864211 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:16:07.869927 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:16:07.871642 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:16:07.873200 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:16:07.872593 unknown[957]: wrote ssh authorized keys file for user: core
Sep 13 00:16:07.876383 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:16:07.876383 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 00:16:07.934704 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:16:08.144911 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:16:08.148025 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 00:16:08.346578 systemd-networkd[771]: eth0: Gained IPv6LL
Sep 13 00:16:08.497561 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:16:09.690981 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:16:09.690981 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 13 00:16:09.697994 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:16:10.163248 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:16:10.171708 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:16:10.173988 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:16:10.173988 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:16:10.173988 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:16:10.173988 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:16:10.173988 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:16:10.173988 ignition[957]: INFO : files: files passed
Sep 13 00:16:10.173988 ignition[957]: INFO : Ignition finished successfully
Sep 13 00:16:10.188812 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:16:10.200530 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:16:10.202803 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:16:10.214751 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:16:10.215079 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:16:10.222971 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:16:10.228053 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:16:10.228053 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:16:10.238758 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:16:10.244072 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:16:10.246287 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:16:10.270598 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:16:10.311168 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:16:10.312654 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:16:10.317730 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:16:10.321287 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:16:10.323802 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:16:10.336535 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:16:10.359932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:16:10.379578 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:16:10.395387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:16:10.398012 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:16:10.400809 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:16:10.402853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:16:10.404045 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:16:10.406957 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:16:10.409105 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:16:10.411027 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:16:10.413332 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:16:10.415741 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:16:10.418032 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:16:10.420309 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:16:10.422879 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:16:10.424953 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:16:10.427002 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:16:10.428700 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:16:10.429779 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:16:10.432123 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:16:10.434306 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:16:10.436690 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:16:10.437682 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:16:10.440240 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:16:10.440384 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:16:10.443657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:16:10.444806 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:16:10.447279 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:16:10.472080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:16:10.473451 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:16:10.476501 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:16:10.478421 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:16:10.478831 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:16:10.478990 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:16:10.480900 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:16:10.481001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:16:10.484507 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:16:10.484691 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:16:10.485873 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:16:10.486002 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:16:10.502644 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:16:10.505959 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:16:10.506563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:16:10.506762 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:16:10.510801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:16:10.510994 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:16:10.520667 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:16:10.520866 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:16:10.545345 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:16:10.572715 ignition[1011]: INFO : Ignition 2.19.0
Sep 13 00:16:10.572715 ignition[1011]: INFO : Stage: umount
Sep 13 00:16:10.572715 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:10.572715 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:10.572715 ignition[1011]: INFO : umount: umount passed
Sep 13 00:16:10.572715 ignition[1011]: INFO : Ignition finished successfully
Sep 13 00:16:10.579006 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:16:10.580168 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:16:10.582723 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:16:10.583830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:16:10.587455 systemd[1]: Stopped target network.target - Network.
Sep 13 00:16:10.589498 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:16:10.589588 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:16:10.592842 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:16:10.593895 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:16:10.596141 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:16:10.597137 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:16:10.599311 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:16:10.599379 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:16:10.602674 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:16:10.603773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:16:10.606291 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:16:10.608747 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:16:10.613241 systemd-networkd[771]: eth0: DHCPv6 lease lost
Sep 13 00:16:10.615972 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:16:10.617358 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:16:10.620740 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:16:10.620801 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:16:10.635316 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:16:10.637757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:16:10.637856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:16:10.646384 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:16:10.651024 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:16:10.651262 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:16:10.664888 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:16:10.666295 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:16:10.724678 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:16:10.724892 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:16:10.729385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:16:10.729499 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:16:10.731854 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:16:10.731918 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:16:10.733984 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:16:10.734065 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:16:10.736476 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:16:10.736549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:16:10.738330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:16:10.738399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:10.749379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:16:10.750572 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:16:10.750652 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:16:10.752720 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:16:10.752783 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:16:10.754775 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:16:10.754844 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:16:10.757035 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:16:10.757100 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:16:10.759356 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:16:10.759431 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:16:10.761654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:16:10.761716 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:16:10.763936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:16:10.763998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:10.767032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:16:10.767204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:16:10.769478 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:16:10.786519 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:16:10.795353 systemd[1]: Switching root.
Sep 13 00:16:10.854079 systemd-journald[192]: Journal stopped
Sep 13 00:16:13.186003 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:16:13.186086 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:16:13.186103 kernel: SELinux: policy capability open_perms=1
Sep 13 00:16:13.186125 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:16:13.186142 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:16:13.186176 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:16:13.186190 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:16:13.186212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:16:13.186231 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:16:13.186251 kernel: audit: type=1403 audit(1757722571.983:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:16:13.186266 systemd[1]: Successfully loaded SELinux policy in 42.920ms.
Sep 13 00:16:13.186289 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.236ms.
Sep 13 00:16:13.186304 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:16:13.186319 systemd[1]: Detected virtualization kvm.
Sep 13 00:16:13.186333 systemd[1]: Detected architecture x86-64.
Sep 13 00:16:13.186357 systemd[1]: Detected first boot.
Sep 13 00:16:13.186376 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:16:13.186390 zram_generator::config[1055]: No configuration found.
Sep 13 00:16:13.186406 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:16:13.186420 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:16:13.186435 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:16:13.186450 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:16:13.186465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:16:13.186479 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:16:13.186499 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:16:13.186519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:16:13.186534 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:16:13.186548 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:16:13.186563 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:16:13.186577 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:16:13.186591 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:16:13.186610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:16:13.186624 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:16:13.186641 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:16:13.186656 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:16:13.186671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:16:13.186685 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:16:13.186699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:16:13.186714 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:16:13.186728 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:16:13.186742 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:16:13.186760 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:16:13.186775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:16:13.186789 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:16:13.186804 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:16:13.186820 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:16:13.186835 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:16:13.186850 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:16:13.186864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:16:13.186880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
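The "zram_generator::config: No configuration found" line is harmless: the generator only creates compressed-RAM swap devices if a config file exists, and none is shipped here. If zram swap were wanted, a minimal /etc/systemd/zram-generator.conf would look something like the following (illustrative values, assuming the generator in this image accepts the usual keys):

    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd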
Sep 13 00:16:13.186895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:16:13.186909 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:16:13.186924 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:16:13.186938 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:16:13.186953 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:16:13.186967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:13.186982 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:16:13.186996 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:16:13.187013 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:16:13.187028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:16:13.187042 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:16:13.187057 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:16:13.187072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:13.187087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:16:13.187101 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:16:13.187115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:16:13.187134 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:16:13.187162 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:16:13.187178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:16:13.187192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:16:13.187207 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:16:13.187221 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:16:13.187236 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:16:13.187253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:16:13.187269 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:16:13.187287 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:16:13.187322 systemd-journald[1118]: Collecting audit messages is disabled.
Sep 13 00:16:13.187356 kernel: loop: module loaded
Sep 13 00:16:13.187371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:16:13.187385 systemd-journald[1118]: Journal started
Sep 13 00:16:13.187412 systemd-journald[1118]: Runtime Journal (/run/log/journal/1baa28fd8f424ebe8ec42685475a73a4) is 6.0M, max 48.3M, 42.2M free.
Sep 13 00:16:12.828574 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:16:12.850607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:16:12.851134 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:16:13.243180 kernel: fuse: init (API version 7.39)
Sep 13 00:16:13.246169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:16:13.250229 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:16:13.298481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:16:13.301509 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:16:13.301556 systemd[1]: Stopped verity-setup.service.
Sep 13 00:16:13.301578 kernel: ACPI: bus type drm_connector registered
Sep 13 00:16:13.338217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:13.341182 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:16:13.343064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:16:13.344306 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:16:13.345585 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:16:13.346716 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:16:13.347942 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:16:13.349227 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:16:13.350554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:16:13.352217 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:16:13.352457 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:16:13.353967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:16:13.354211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:16:13.355709 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:16:13.355923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:16:13.364740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:16:13.365020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:16:13.366785 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:16:13.367035 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:16:13.368648 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:16:13.368880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:16:13.370535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:16:13.372470 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:16:13.374282 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:16:13.391713 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:16:13.398633 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:16:13.413628 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:16:13.415141 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:16:13.415206 systemd[1]: Reached target local-fs.target - Local File Systems.
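All of the modprobe@X.service entries here instantiate systemd's static template unit, which simply asks the kernel to load the module named by the instance string. Its upstream content is approximately (abridged; condition and ordering lines omitted):

    # /usr/lib/systemd/system/modprobe@.service
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I

So modprobe@drm.service runs roughly "modprobe -abq drm"; the leading "-" on ExecStart makes a failed load non-fatal, which is why missing modules only produce a skipped unit rather than an error.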
Sep 13 00:16:13.418670 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:16:13.422268 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:16:13.425551 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:16:13.427092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:13.433283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:16:13.447483 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:16:13.451736 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:16:13.461188 systemd-journald[1118]: Time spent on flushing to /var/log/journal/1baa28fd8f424ebe8ec42685475a73a4 is 20.594ms for 990 entries.
Sep 13 00:16:13.461188 systemd-journald[1118]: System Journal (/var/log/journal/1baa28fd8f424ebe8ec42685475a73a4) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:16:13.594827 systemd-journald[1118]: Received client request to flush runtime journal.
Sep 13 00:16:13.594884 kernel: loop0: detected capacity change from 0 to 142488
Sep 13 00:16:13.459490 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:16:13.461431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:16:13.471478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:16:13.483398 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:16:13.499724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:16:13.503850 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:16:13.506029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:16:13.509616 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:16:13.511713 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:16:13.513797 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:16:13.531882 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:16:13.552672 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:16:13.554983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:16:13.566768 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Sep 13 00:16:13.566800 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Sep 13 00:16:13.592498 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:16:13.594100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:16:13.606694 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:16:13.609476 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:16:13.611598 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
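The two journald size reports, here and in the block above, describe the volatile journal in /run (6.0M used, 48.3M max) and the persistent journal in /var/log/journal (8.0M used, 195.6M max); by default both ceilings are derived from the size of the backing filesystem. They can be pinned explicitly in journald's configuration, for example:

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=48M    # volatile journal in /run
    SystemMaxUse=195M    # persistent journal in /var/log/journal

The "Received client request to flush runtime journal" entry is systemd-journal-flush.service asking journald to migrate the early-boot records from /run into the now-writable persistent store.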
Sep 13 00:16:13.613954 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:16:13.629490 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:16:14.000185 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:16:14.022614 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:16:14.067260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:16:14.114253 kernel: loop2: detected capacity change from 0 to 229808
Sep 13 00:16:14.130525 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Sep 13 00:16:14.130546 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Sep 13 00:16:14.137398 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:16:14.223196 kernel: loop3: detected capacity change from 0 to 142488
Sep 13 00:16:14.312191 kernel: loop4: detected capacity change from 0 to 140768
Sep 13 00:16:14.324883 kernel: loop5: detected capacity change from 0 to 229808
Sep 13 00:16:14.332815 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:16:14.334223 (sd-merge)[1196]: Merged extensions into '/usr'.
Sep 13 00:16:14.341892 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:16:14.342216 systemd[1]: Reloading...
Sep 13 00:16:14.423283 zram_generator::config[1219]: No configuration found.
Sep 13 00:16:14.778492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:16:14.850348 systemd[1]: Reloading finished in 507 ms.
Sep 13 00:16:14.868318 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:16:14.889730 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:16:14.891557 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:16:14.893678 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:16:14.895771 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:16:14.942833 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:16:14.951389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:16:14.961888 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:16:14.962067 systemd[1]: Reloading...
Sep 13 00:16:15.010778 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:16:15.011402 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:16:15.023863 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:16:15.024584 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 13 00:16:15.024821 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 13 00:16:15.030738 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:16:15.030960 systemd-tmpfiles[1262]: Skipping /boot
Sep 13 00:16:15.059188 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:16:15.059221 systemd-tmpfiles[1262]: Skipping /boot
Sep 13 00:16:15.130283 zram_generator::config[1289]: No configuration found.
Sep 13 00:16:15.267683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:16:15.329125 systemd[1]: Reloading finished in 366 ms.
Sep 13 00:16:15.371107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:16:15.389359 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:16:15.444393 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:16:15.486101 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:16:15.490784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:16:15.496355 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:16:15.500030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:16:15.502186 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:16:15.513047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:16:15.517456 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:16:15.523243 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:16:15.529165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:15.529472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:15.547052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:16:15.553343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:16:15.560320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:16:15.562492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:15.562716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:15.565585 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:16:15.568872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:16:15.570992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:16:15.571240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:16:15.573330 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:16:15.573746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:16:15.575877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:16:15.576222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
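The (sd-merge) messages in the previous block record systemd-sysext overlaying the three extension images onto /usr; the loopN capacity changes are those images being attached. An extension is activated by a symlink under /etc/extensions (the one Ignition wrote earlier) and must carry an extension-release file matching the host. Schematically, and assuming the usual Flatcar metadata:

    /etc/extensions/kubernetes.raw
        -> /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw

    # inside the image:
    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar
        SYSEXT_LEVEL=1.0

The daemon-reload that follows the merge is what makes the units shipped inside the extensions (containerd, docker, kubelet) visible to systemd.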
Sep 13 00:16:15.584477 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:16:15.595696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:15.595974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:15.608424 augenrules[1362]: No rules
Sep 13 00:16:15.609554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:16:15.614471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:16:15.616064 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Sep 13 00:16:15.618546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:16:15.626507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:16:15.627884 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:15.628122 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:16:15.631377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:15.632734 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:16:15.635118 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:16:15.637230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:16:15.637488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:16:15.639649 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:16:15.639877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:16:15.641773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:16:15.641980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:16:15.644233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:16:15.644465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:16:15.649624 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:16:15.657476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:16:15.657576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:16:15.668416 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:16:15.670440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:16:15.688393 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:16:15.759071 systemd-resolved[1338]: Positive Trust Anchors:
Sep 13 00:16:15.759099 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:16:15.759145 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:16:15.766977 systemd-resolved[1338]: Defaulting to hostname 'linux'.
Sep 13 00:16:15.769884 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:16:15.771766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:16:15.790577 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:16:15.813264 systemd-networkd[1396]: lo: Link UP
Sep 13 00:16:15.813304 systemd-networkd[1396]: lo: Gained carrier
Sep 13 00:16:15.818758 systemd-networkd[1396]: Enumeration completed
Sep 13 00:16:15.818923 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:16:15.848884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1390)
Sep 13 00:16:15.819313 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:15.819318 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:16:15.846042 systemd[1]: Reached target network.target - Network.
Sep 13 00:16:15.854983 systemd-networkd[1396]: eth0: Link UP
Sep 13 00:16:15.855143 systemd-networkd[1396]: eth0: Gained carrier
Sep 13 00:16:15.855194 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:15.901327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:16:15.908391 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:16:15.913516 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:16:15.920211 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:16:15.921998 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection.
Sep 13 00:16:16.905411 systemd-resolved[1338]: Clock change detected. Flushing caches.
Sep 13 00:16:16.905516 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:16:16.905641 systemd-timesyncd[1385]: Initial clock synchronization to Sat 2025-09-13 00:16:16.905365 UTC.
Sep 13 00:16:16.911053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:16:16.929570 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:16:16.931776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:16:16.934838 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
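zz-default.network is Flatcar's lowest-priority catch-all, and the "potentially unpredictable interface name" note is networkd pointing out that it matches by name pattern rather than by stable attributes such as MAC address. Its effective content is approximately:

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes

This is why eth0 picks up 10.0.0.148/16 over DHCP without any machine-specific unit; dropping a more specific .network file into /etc/systemd/network would override it. The "." IN DS 20326 ... entry above is the built-in DNSSEC trust anchor for the root zone that resolved loads at startup.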
Sep 13 00:16:16.952717 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:16:16.966448 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:16:17.037713 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:16:17.090741 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 13 00:16:17.091081 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:16:17.091591 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 13 00:16:17.092904 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:16:17.093211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:17.102882 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:16:17.168045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:17.340042 kernel: kvm_amd: TSC scaling supported
Sep 13 00:16:17.340167 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 00:16:17.340182 kernel: kvm_amd: Nested Paging enabled
Sep 13 00:16:17.340968 kernel: kvm_amd: LBR virtualization supported
Sep 13 00:16:17.340987 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 00:16:17.341961 kernel: kvm_amd: Virtual GIF supported
Sep 13 00:16:17.362592 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:16:17.396350 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:16:17.415051 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:16:17.428291 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:16:17.470972 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:16:17.481536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:16:17.482825 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:16:17.484121 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:16:17.485439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:16:17.487045 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:16:17.488285 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:16:17.489819 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:16:17.491147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:16:17.491184 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:16:17.492171 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:16:17.494325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:16:17.497323 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:16:17.511412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:16:17.514035 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:16:17.515616 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:16:17.516751 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:16:17.517688 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:16:17.518618 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:16:17.518658 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:16:17.519697 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:16:17.521788 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:16:17.524685 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:16:17.525717 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:16:17.529643 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:16:17.538891 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:16:17.541693 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:16:17.548139 jq[1435]: false
Sep 13 00:16:17.552739 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:16:17.558762 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:16:17.560355 extend-filesystems[1436]: Found loop3
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found loop4
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found loop5
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found sr0
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda1
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda2
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda3
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found usr
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda4
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda6
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda7
Sep 13 00:16:17.563921 extend-filesystems[1436]: Found vda9
Sep 13 00:16:17.563921 extend-filesystems[1436]: Checking size of /dev/vda9
Sep 13 00:16:17.564127 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:16:17.570675 dbus-daemon[1434]: [system] SELinux support is enabled
Sep 13 00:16:17.570889 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:16:17.573042 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:16:17.574989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:16:17.578846 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:16:17.581890 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:16:17.582838 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:16:17.588314 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:16:17.588960 extend-filesystems[1436]: Resized partition /dev/vda9 Sep 13 00:16:17.613640 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1398) Sep 13 00:16:17.613773 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:16:17.610924 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:16:17.627102 update_engine[1450]: I20250913 00:16:17.617185 1450 main.cc:92] Flatcar Update Engine starting Sep 13 00:16:17.627102 update_engine[1450]: I20250913 00:16:17.624308 1450 update_check_scheduler.cc:74] Next update check in 7m25s Sep 13 00:16:17.611284 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:16:17.622307 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:16:17.623231 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:16:17.633409 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:16:17.633755 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:16:17.634541 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:16:17.642015 jq[1453]: true Sep 13 00:16:17.650017 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:16:17.651391 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:16:17.651422 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:16:17.652866 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:16:17.652887 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:16:17.655698 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:16:17.656934 jq[1467]: true Sep 13 00:16:17.664576 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:16:17.782579 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:16:17.782611 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:16:17.782999 systemd-logind[1446]: New seat seat0. Sep 13 00:16:17.786713 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:16:17.838334 tar[1456]: linux-amd64/LICENSE Sep 13 00:16:17.853006 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:16:17.874475 tar[1456]: linux-amd64/helm Sep 13 00:16:17.904745 systemd-networkd[1396]: eth0: Gained IPv6LL Sep 13 00:16:17.909277 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:16:17.912216 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:16:17.918784 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:16:18.004815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:18.012668 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
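The extend-filesystems job above has kicked off an on-line resize of /dev/vda9, and the kernel record gives the block counts. Those figures translate directly to sizes — a quick check using only what the log reports (4 KiB blocks, growing from 553472 to 1864699 blocks):

```python
# Back-of-envelope check of the resize reported for /dev/vda9 above.
BLOCK = 4096                      # "(4k) blocks" per the resize2fs output
old, new = 553_472, 1_864_699     # block counts from the kernel record
GIB = 1 << 30
print(f"before: {old * BLOCK / GIB:.2f} GiB")   # ~2.11 GiB
print(f"after:  {new * BLOCK / GIB:.2f} GiB")   # ~7.11 GiB
```

So the root filesystem grows from roughly 2.1 GiB to roughly 7.1 GiB, matching the on-line resize that completes a moment later.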
Sep 13 00:16:18.096740 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:16:18.131262 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:16:18.131262 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:16:18.131262 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:16:18.142116 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Sep 13 00:16:18.133315 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:16:18.143620 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:16:18.134387 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:16:18.147527 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:16:18.149997 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:16:18.152734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:16:18.160905 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:16:18.178362 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:16:18.185285 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:16:18.246474 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:16:18.247598 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:16:18.251013 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:16:18.251317 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:16:18.256174 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:16:18.646306 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:16:18.676316 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:16:18.707040 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:16:18.862945 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:16:18.866009 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:16:18.868402 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:16:18.881762 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:38028.service - OpenSSH per-connection server daemon (10.0.0.1:38028). Sep 13 00:16:18.896945 containerd[1462]: time="2025-09-13T00:16:18.896818624Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:16:18.940447 containerd[1462]: time="2025-09-13T00:16:18.939194294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942136 containerd[1462]: time="2025-09-13T00:16:18.942098941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942136 containerd[1462]: time="2025-09-13T00:16:18.942127445Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:16:18.942136 containerd[1462]: time="2025-09-13T00:16:18.942144707Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 13 00:16:18.942398 containerd[1462]: time="2025-09-13T00:16:18.942368527Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:16:18.942398 containerd[1462]: time="2025-09-13T00:16:18.942390718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942703 containerd[1462]: time="2025-09-13T00:16:18.942475888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942703 containerd[1462]: time="2025-09-13T00:16:18.942507067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942998 containerd[1462]: time="2025-09-13T00:16:18.942828069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942998 containerd[1462]: time="2025-09-13T00:16:18.942850360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942998 containerd[1462]: time="2025-09-13T00:16:18.942864858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:16:18.942998 containerd[1462]: time="2025-09-13T00:16:18.942875257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.943109 containerd[1462]: time="2025-09-13T00:16:18.943035968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.943348 containerd[1462]: time="2025-09-13T00:16:18.943323508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:16:18.943602 containerd[1462]: time="2025-09-13T00:16:18.943459443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:16:18.943602 containerd[1462]: time="2025-09-13T00:16:18.943477276Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:16:18.943667 containerd[1462]: time="2025-09-13T00:16:18.943622709Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:16:18.943722 containerd[1462]: time="2025-09-13T00:16:18.943691748Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:16:18.960241 containerd[1462]: time="2025-09-13T00:16:18.959931904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:16:18.960241 containerd[1462]: time="2025-09-13T00:16:18.960038013Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:16:18.960241 containerd[1462]: time="2025-09-13T00:16:18.960067889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 13 00:16:18.960241 containerd[1462]: time="2025-09-13T00:16:18.960090221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:16:18.960241 containerd[1462]: time="2025-09-13T00:16:18.960119136Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:16:18.960597 containerd[1462]: time="2025-09-13T00:16:18.960422835Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:16:18.960982 containerd[1462]: time="2025-09-13T00:16:18.960943562Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:16:18.961132 containerd[1462]: time="2025-09-13T00:16:18.961084666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:16:18.961132 containerd[1462]: time="2025-09-13T00:16:18.961109373Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:16:18.961132 containerd[1462]: time="2025-09-13T00:16:18.961125032Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:16:18.961132 containerd[1462]: time="2025-09-13T00:16:18.961138036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961152533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961172992Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961188281Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961202297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961220561Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961232704Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961247872Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:16:18.961282 containerd[1462]: time="2025-09-13T00:16:18.961277939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961293398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961308025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961320098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961332030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961344674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961356997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961369080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961381673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961395990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961414515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961427509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961441525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961456834Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961476100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.961484 containerd[1462]: time="2025-09-13T00:16:18.961488043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961499284Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961575907Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961595925Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961617195Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961628997Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961638595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961651078Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961662760Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:16:18.962355 containerd[1462]: time="2025-09-13T00:16:18.961673961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:16:18.962649 containerd[1462]: time="2025-09-13T00:16:18.962172316Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:16:18.962649 containerd[1462]: time="2025-09-13T00:16:18.962260672Z" level=info msg="Connect containerd service" Sep 13 00:16:18.962649 containerd[1462]: time="2025-09-13T00:16:18.962318941Z" level=info msg="using legacy CRI server" Sep 13 00:16:18.962649 containerd[1462]: time="2025-09-13T00:16:18.962329050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:16:18.962649 containerd[1462]: time="2025-09-13T00:16:18.962445929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:16:18.963469 
containerd[1462]: time="2025-09-13T00:16:18.963425336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:16:18.963676 containerd[1462]: time="2025-09-13T00:16:18.963615803Z" level=info msg="Start subscribing containerd event" Sep 13 00:16:18.963726 containerd[1462]: time="2025-09-13T00:16:18.963690082Z" level=info msg="Start recovering state" Sep 13 00:16:18.964069 containerd[1462]: time="2025-09-13T00:16:18.963772387Z" level=info msg="Start event monitor" Sep 13 00:16:18.964069 containerd[1462]: time="2025-09-13T00:16:18.963803395Z" level=info msg="Start snapshots syncer" Sep 13 00:16:18.964069 containerd[1462]: time="2025-09-13T00:16:18.963814716Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:16:18.964069 containerd[1462]: time="2025-09-13T00:16:18.963840534Z" level=info msg="Start streaming server" Sep 13 00:16:18.964427 containerd[1462]: time="2025-09-13T00:16:18.964387470Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:16:18.964492 containerd[1462]: time="2025-09-13T00:16:18.964465276Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:16:18.964742 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:16:18.965738 containerd[1462]: time="2025-09-13T00:16:18.965507821Z" level=info msg="containerd successfully booted in 0.070541s" Sep 13 00:16:18.981577 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 38028 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:18.983573 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:18.997780 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:16:19.066376 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:16:19.072562 systemd-logind[1446]: New session 1 of user core. Sep 13 00:16:19.091006 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:16:19.111595 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:16:19.137312 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:16:19.192320 tar[1456]: linux-amd64/README.md Sep 13 00:16:19.259399 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:16:19.397697 systemd[1543]: Queued start job for default target default.target. Sep 13 00:16:19.409180 systemd[1543]: Created slice app.slice - User Application Slice. Sep 13 00:16:19.409226 systemd[1543]: Reached target paths.target - Paths. Sep 13 00:16:19.409246 systemd[1543]: Reached target timers.target - Timers. Sep 13 00:16:19.411728 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:16:19.454308 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:16:19.454517 systemd[1543]: Reached target sockets.target - Sockets. Sep 13 00:16:19.454538 systemd[1543]: Reached target basic.target - Basic System. Sep 13 00:16:19.454624 systemd[1543]: Reached target default.target - Main User Target. Sep 13 00:16:19.454676 systemd[1543]: Startup finished in 289ms. Sep 13 00:16:19.455725 systemd[1]: Started user@500.service - User Manager for UID 500. 
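The single-line "Start cri plugin with config {...}" record a few entries back is containerd printing its effective CRI configuration as one flattened Go struct dump. It reads far better with the brace nesting restored; a throwaway re-indenter along these lines does the job (purely cosmetic — it is not a real parser and would be confused by braces inside quoted values):

```python
def indent_go_dump(s: str, step: int = 2) -> str:
    """Insert newlines/indentation around {...} so a one-line Go struct
    dump (like containerd's CRI config record above) becomes readable."""
    out, depth = [], 0
    for ch in s:
        if ch == "{":
            depth += 1
            out.append("{\n" + " " * depth * step)
        elif ch == "}":
            depth -= 1
            out.append("\n" + " " * depth * step + "}")
        else:
            out.append(ch)
    return "".join(out)

# Tiny excerpt of the dump above, just to show the effect:
print(indent_go_dump("PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc}}"))
```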
Sep 13 00:16:19.471911 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:16:19.581769 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:38048.service - OpenSSH per-connection server daemon (10.0.0.1:38048). Sep 13 00:16:19.684790 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 38048 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:19.687513 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:19.709790 systemd-logind[1446]: New session 2 of user core. Sep 13 00:16:19.724338 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:16:19.803002 sshd[1557]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:19.825803 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:38048.service: Deactivated successfully. Sep 13 00:16:19.828193 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:16:19.830910 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:16:19.832399 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:38064.service - OpenSSH per-connection server daemon (10.0.0.1:38064). Sep 13 00:16:19.835039 systemd-logind[1446]: Removed session 2. Sep 13 00:16:19.888175 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 38064 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:19.953477 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:19.962479 systemd-logind[1446]: New session 3 of user core. Sep 13 00:16:19.975151 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:16:20.045671 sshd[1564]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:20.051733 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:38064.service: Deactivated successfully. Sep 13 00:16:20.055283 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:16:20.089263 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:16:20.090802 systemd-logind[1446]: Removed session 3. Sep 13 00:16:20.531287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:20.555504 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:20.557183 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:16:20.559733 systemd[1]: Startup finished in 1.142s (kernel) + 8.237s (initrd) + 7.636s (userspace) = 17.016s. Sep 13 00:16:21.432017 kubelet[1575]: E0913 00:16:21.431928 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:21.437721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:21.437949 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:21.438488 systemd[1]: kubelet.service: Consumed 2.723s CPU time. Sep 13 00:16:30.059119 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:43672.service - OpenSSH per-connection server daemon (10.0.0.1:43672). 
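kubelet's exit above is expected at this stage: it is configured to read /var/lib/kubelet/config.yaml, and on a kubeadm-provisioned node that file is only written once kubeadm init or kubeadm join runs, so the unit crash-loops (see the later "Scheduled restart job" records) until then. A trivial pre-flight check for the same condition:

```python
# The kubelet keeps exiting because /var/lib/kubelet/config.yaml does not
# exist yet; kubeadm writes it during init/join. Minimal pre-flight check.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.exists():
    print(f"{cfg} missing: node not yet initialized by kubeadm; "
          "kubelet will restart-loop until the file is written")
```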
Sep 13 00:16:30.106788 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:30.109252 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:30.115733 systemd-logind[1446]: New session 4 of user core. Sep 13 00:16:30.122820 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:16:30.184985 sshd[1588]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:30.197242 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:43672.service: Deactivated successfully. Sep 13 00:16:30.199332 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:16:30.201220 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:16:30.217255 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:43686.service - OpenSSH per-connection server daemon (10.0.0.1:43686). Sep 13 00:16:30.218724 systemd-logind[1446]: Removed session 4. Sep 13 00:16:30.254116 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 43686 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:30.255985 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:30.260523 systemd-logind[1446]: New session 5 of user core. Sep 13 00:16:30.271908 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:16:30.325709 sshd[1595]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:30.337466 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:43686.service: Deactivated successfully. Sep 13 00:16:30.340161 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:16:30.343041 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:16:30.355037 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:43688.service - OpenSSH per-connection server daemon (10.0.0.1:43688). Sep 13 00:16:30.356199 systemd-logind[1446]: Removed session 5. Sep 13 00:16:30.394105 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 43688 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:30.396178 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:30.401501 systemd-logind[1446]: New session 6 of user core. Sep 13 00:16:30.410859 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:16:30.471740 sshd[1602]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:30.484740 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:43688.service: Deactivated successfully. Sep 13 00:16:30.486999 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:16:30.488915 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:16:30.490740 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:43704.service - OpenSSH per-connection server daemon (10.0.0.1:43704). Sep 13 00:16:30.491756 systemd-logind[1446]: Removed session 6. Sep 13 00:16:30.533517 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 43704 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:30.535488 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:30.539971 systemd-logind[1446]: New session 7 of user core. Sep 13 00:16:30.549776 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 13 00:16:30.612868 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:16:30.613253 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:30.633986 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:30.636511 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:30.651083 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:43704.service: Deactivated successfully. Sep 13 00:16:30.653129 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:16:30.654878 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:16:30.666920 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:43714.service - OpenSSH per-connection server daemon (10.0.0.1:43714). Sep 13 00:16:30.668009 systemd-logind[1446]: Removed session 7. Sep 13 00:16:30.707627 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 43714 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:30.710342 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:30.716534 systemd-logind[1446]: New session 8 of user core. Sep 13 00:16:30.729940 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:16:30.788571 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:16:30.788945 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:30.793399 sudo[1621]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:30.801467 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:16:30.801937 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:30.823187 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:16:30.825450 auditctl[1624]: No rules Sep 13 00:16:30.826996 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:16:30.827275 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:16:30.830292 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:16:30.867956 augenrules[1642]: No rules Sep 13 00:16:30.870211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:16:30.871838 sudo[1620]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:30.874150 sshd[1617]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:30.886460 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:43714.service: Deactivated successfully. Sep 13 00:16:30.889514 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:16:30.891712 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:16:30.906112 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:43718.service - OpenSSH per-connection server daemon (10.0.0.1:43718). Sep 13 00:16:30.907575 systemd-logind[1446]: Removed session 8. Sep 13 00:16:30.946395 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 43718 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:30.948470 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:30.954606 systemd-logind[1446]: New session 9 of user core. Sep 13 00:16:30.963817 systemd[1]: Started session-9.scope - Session 9 of User core. 
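Sessions 2 through 8 above open and close within seconds — the typical signature of provisioning tooling running one command per SSH connection. Counting connections per user and source address from the sshd "Accepted publickey" records is a small exercise; the line shape is taken from this log, and as before the sketch assumes one record per line:

```python
# Tally SSH sessions from sshd "Accepted publickey" records on stdin.
import re
import sys
from collections import Counter

ACCEPT = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
who = Counter(
    (m[1], m[2]) for line in sys.stdin if (m := ACCEPT.search(line))
)
for (user, src), n in who.most_common():
    print(f"{user}@{src}: {n} sessions")
```

Run against this boot it would show all sessions as core@10.0.0.1, one per provisioning step.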
Sep 13 00:16:31.022091 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:16:31.022722 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:31.511038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:16:31.519164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:31.822655 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:16:31.827640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:31.829319 (dockerd)[1678]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:16:31.833710 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:31.957462 kubelet[1680]: E0913 00:16:31.956066 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:31.965954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:31.966365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:32.514001 dockerd[1678]: time="2025-09-13T00:16:32.513915542Z" level=info msg="Starting up" Sep 13 00:16:33.055561 dockerd[1678]: time="2025-09-13T00:16:33.055451395Z" level=info msg="Loading containers: start." Sep 13 00:16:33.213591 kernel: Initializing XFRM netlink socket Sep 13 00:16:33.348209 systemd-networkd[1396]: docker0: Link UP Sep 13 00:16:33.378884 dockerd[1678]: time="2025-09-13T00:16:33.378783127Z" level=info msg="Loading containers: done." Sep 13 00:16:33.405674 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3490994282-merged.mount: Deactivated successfully. Sep 13 00:16:33.544794 dockerd[1678]: time="2025-09-13T00:16:33.544691234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:16:33.545402 dockerd[1678]: time="2025-09-13T00:16:33.544870611Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:16:33.545402 dockerd[1678]: time="2025-09-13T00:16:33.545047292Z" level=info msg="Daemon has completed initialization" Sep 13 00:16:34.126565 dockerd[1678]: time="2025-09-13T00:16:34.126430603Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:16:34.126843 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:16:35.976930 containerd[1462]: time="2025-09-13T00:16:35.976851984Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:16:36.774354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718654162.mount: Deactivated successfully. 
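dockerd stamps its own lifecycle above: "Starting up" at 00:16:32.513 and "API listen on /run/docker.sock" at 00:16:34.126, with "Loading containers" and the docker0 bridge setup in between. So daemon initialization took roughly 1.6 seconds here — a quick check from the two timestamps as logged:

```python
# Docker daemon init time, computed from the two dockerd records above.
from datetime import datetime

FMT = "%H:%M:%S.%f"
start = datetime.strptime("00:16:32.513915", FMT)  # "Starting up"
ready = datetime.strptime("00:16:34.126430", FMT)  # "API listen on /run/docker.sock"
print((ready - start).total_seconds())  # ~1.61 s
```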
Sep 13 00:16:38.569947 containerd[1462]: time="2025-09-13T00:16:38.569826416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:38.595291 containerd[1462]: time="2025-09-13T00:16:38.595175926Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 00:16:38.634805 containerd[1462]: time="2025-09-13T00:16:38.634697623Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:38.685397 containerd[1462]: time="2025-09-13T00:16:38.685316822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:38.686912 containerd[1462]: time="2025-09-13T00:16:38.686862381Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.70994296s" Sep 13 00:16:38.686912 containerd[1462]: time="2025-09-13T00:16:38.686906123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 00:16:38.687687 containerd[1462]: time="2025-09-13T00:16:38.687659746Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:16:41.202118 containerd[1462]: time="2025-09-13T00:16:41.201860199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:41.202998 containerd[1462]: time="2025-09-13T00:16:41.202895330Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 13 00:16:41.204531 containerd[1462]: time="2025-09-13T00:16:41.204472548Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:41.208365 containerd[1462]: time="2025-09-13T00:16:41.208301519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:41.209635 containerd[1462]: time="2025-09-13T00:16:41.209586849Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.521896196s" Sep 13 00:16:41.209711 containerd[1462]: time="2025-09-13T00:16:41.209638306Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 00:16:41.210405 
containerd[1462]: time="2025-09-13T00:16:41.210368565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:16:42.010945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:16:42.059958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:42.297572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:42.304447 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:42.536621 kubelet[1907]: E0913 00:16:42.536517 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:42.541766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:42.542027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:43.462991 containerd[1462]: time="2025-09-13T00:16:43.462882172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:43.464696 containerd[1462]: time="2025-09-13T00:16:43.464606757Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 13 00:16:43.466950 containerd[1462]: time="2025-09-13T00:16:43.466880861Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:43.471168 containerd[1462]: time="2025-09-13T00:16:43.471105013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:43.472637 containerd[1462]: time="2025-09-13T00:16:43.472552909Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.261906933s" Sep 13 00:16:43.472637 containerd[1462]: time="2025-09-13T00:16:43.472628250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 00:16:43.473349 containerd[1462]: time="2025-09-13T00:16:43.473311000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:16:45.585258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390051862.mount: Deactivated successfully. 
Sep 13 00:16:45.920912 containerd[1462]: time="2025-09-13T00:16:45.920730676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:45.921801 containerd[1462]: time="2025-09-13T00:16:45.921732414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 13 00:16:45.922809 containerd[1462]: time="2025-09-13T00:16:45.922765852Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:45.925043 containerd[1462]: time="2025-09-13T00:16:45.925001435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:45.925780 containerd[1462]: time="2025-09-13T00:16:45.925744268Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.452394866s" Sep 13 00:16:45.925846 containerd[1462]: time="2025-09-13T00:16:45.925781658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 00:16:45.926393 containerd[1462]: time="2025-09-13T00:16:45.926358600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:16:46.530298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642722834.mount: Deactivated successfully. 
Sep 13 00:16:48.268033 containerd[1462]: time="2025-09-13T00:16:48.267920855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:48.290049 containerd[1462]: time="2025-09-13T00:16:48.289907268Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 13 00:16:48.299050 containerd[1462]: time="2025-09-13T00:16:48.298936260Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:48.311627 containerd[1462]: time="2025-09-13T00:16:48.311487478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:48.313252 containerd[1462]: time="2025-09-13T00:16:48.313198597Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.386806464s" Sep 13 00:16:48.313252 containerd[1462]: time="2025-09-13T00:16:48.313239544Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 00:16:48.314176 containerd[1462]: time="2025-09-13T00:16:48.313963091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:16:49.752325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908384363.mount: Deactivated successfully. 
Sep 13 00:16:49.762104 containerd[1462]: time="2025-09-13T00:16:49.762021728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:49.762987 containerd[1462]: time="2025-09-13T00:16:49.762861583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:16:49.764402 containerd[1462]: time="2025-09-13T00:16:49.764338492Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:49.767119 containerd[1462]: time="2025-09-13T00:16:49.767056800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:49.768095 containerd[1462]: time="2025-09-13T00:16:49.768037980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.454037309s" Sep 13 00:16:49.768095 containerd[1462]: time="2025-09-13T00:16:49.768090298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:16:49.768852 containerd[1462]: time="2025-09-13T00:16:49.768809497Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:16:50.676978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639472265.mount: Deactivated successfully. Sep 13 00:16:52.761235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:16:52.774411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:53.092433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:53.105103 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:53.226145 kubelet[2043]: E0913 00:16:53.226057 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:53.231798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:53.232052 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:16:54.985733 containerd[1462]: time="2025-09-13T00:16:54.984909111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:54.986343 containerd[1462]: time="2025-09-13T00:16:54.985865626Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 13 00:16:54.987703 containerd[1462]: time="2025-09-13T00:16:54.987657763Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:54.992646 containerd[1462]: time="2025-09-13T00:16:54.992516374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:54.994589 containerd[1462]: time="2025-09-13T00:16:54.994513576Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.225665385s" Sep 13 00:16:54.994589 containerd[1462]: time="2025-09-13T00:16:54.994567830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 00:16:58.545538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:58.557009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:58.591887 systemd[1]: Reloading requested from client PID 2087 ('systemctl') (unit session-9.scope)... Sep 13 00:16:58.591923 systemd[1]: Reloading... Sep 13 00:16:58.703624 zram_generator::config[2126]: No configuration found. Sep 13 00:16:59.628067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:59.726615 systemd[1]: Reloading finished in 1134 ms. Sep 13 00:16:59.778376 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:16:59.778502 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:16:59.778809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:59.781575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:59.975742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:59.981360 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:17:00.522912 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:17:00.522912 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
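That completes the pull of the Kubernetes control-plane images (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd). Taking the byte counts and durations exactly as containerd reported them above gives a rough view of effective registry throughput — note the tiny pause image is latency-dominated rather than bandwidth-dominated:

```python
# (bytes read, seconds) per image, verbatim from the containerd records above.
pulls = {
    "kube-apiserver:v1.33.5":          (30_114_893, 2.70994296),
    "kube-controller-manager:v1.33.5": (26_020_844, 2.521896196),
    "kube-scheduler:v1.33.5":          (20_155_568, 2.261906933),
    "kube-proxy:v1.33.5":              (31_929_469, 2.452394866),
    "coredns:v1.12.0":                 (20_942_238, 2.386806464),
    "pause:3.10":                      (321_138,    1.454037309),
    "etcd:3.5.21-0":                   (58_378_433, 5.225665385),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image:34s} {nbytes / secs / 2**20:6.1f} MiB/s")
```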
Sep 13 00:17:00.522912 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:17:00.523422 kubelet[2175]: I0913 00:17:00.522997 2175 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:17:00.972718 kubelet[2175]: I0913 00:17:00.972585 2175 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:17:00.972718 kubelet[2175]: I0913 00:17:00.972619 2175 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:17:00.973286 kubelet[2175]: I0913 00:17:00.973252 2175 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:17:01.064882 kubelet[2175]: I0913 00:17:01.064815 2175 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:17:01.074676 kubelet[2175]: E0913 00:17:01.074637 2175 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:17:01.092342 kubelet[2175]: E0913 00:17:01.092251 2175 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:17:01.092342 kubelet[2175]: I0913 00:17:01.092324 2175 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:17:01.099645 kubelet[2175]: I0913 00:17:01.099599 2175 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:17:01.100007 kubelet[2175]: I0913 00:17:01.099959 2175 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:17:01.100198 kubelet[2175]: I0913 00:17:01.099992 2175 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:17:01.100304 kubelet[2175]: I0913 00:17:01.100208 2175 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:17:01.100304 kubelet[2175]: I0913 00:17:01.100220 2175 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:17:01.121530 kubelet[2175]: I0913 00:17:01.121457 2175 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:17:01.125146 kubelet[2175]: I0913 00:17:01.125116 2175 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:17:01.125643 kubelet[2175]: I0913 00:17:01.125619 2175 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:17:01.126665 kubelet[2175]: I0913 00:17:01.126636 2175 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:17:01.126703 kubelet[2175]: I0913 00:17:01.126672 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:17:01.151905 kubelet[2175]: E0913 00:17:01.151813 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:17:01.153997 kubelet[2175]: I0913 00:17:01.152886 2175 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:17:01.153997 kubelet[2175]: I0913 00:17:01.153820 2175 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 
00:17:01.155594 kubelet[2175]: E0913 00:17:01.155529 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:17:01.155741 kubelet[2175]: W0913 00:17:01.155700 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:17:01.160328 kubelet[2175]: I0913 00:17:01.160263 2175 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:17:01.160456 kubelet[2175]: I0913 00:17:01.160349 2175 server.go:1289] "Started kubelet" Sep 13 00:17:01.160792 kubelet[2175]: I0913 00:17:01.160516 2175 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:17:01.163571 kubelet[2175]: I0913 00:17:01.163493 2175 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:17:01.167297 kubelet[2175]: I0913 00:17:01.167210 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:17:01.168142 kubelet[2175]: I0913 00:17:01.168118 2175 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:17:01.172633 kubelet[2175]: E0913 00:17:01.172594 2175 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:17:01.172633 kubelet[2175]: I0913 00:17:01.172641 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:17:01.172853 kubelet[2175]: I0913 00:17:01.172761 2175 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:17:01.175237 kubelet[2175]: I0913 00:17:01.172888 2175 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:17:01.175237 kubelet[2175]: E0913 00:17:01.172141 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af693fedaabb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:17:01.160303291 +0000 UTC m=+1.173351501,LastTimestamp:2025-09-13 00:17:01.160303291 +0000 UTC m=+1.173351501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:17:01.175237 kubelet[2175]: E0913 00:17:01.174747 2175 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:01.175237 kubelet[2175]: I0913 00:17:01.175012 2175 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:17:01.175237 kubelet[2175]: I0913 00:17:01.175041 2175 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:17:01.175237 kubelet[2175]: E0913 00:17:01.175160 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:17:01.175526 kubelet[2175]: E0913 00:17:01.175463 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms" Sep 13 00:17:01.176032 kubelet[2175]: I0913 00:17:01.176001 2175 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:17:01.176139 kubelet[2175]: I0913 00:17:01.176116 2175 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:17:01.178177 kubelet[2175]: I0913 00:17:01.178143 2175 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:17:01.186781 kubelet[2175]: I0913 00:17:01.186671 2175 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:17:01.196750 kubelet[2175]: I0913 00:17:01.196712 2175 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:17:01.196750 kubelet[2175]: I0913 00:17:01.196737 2175 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:17:01.196935 kubelet[2175]: I0913 00:17:01.196766 2175 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:17:01.200434 kubelet[2175]: I0913 00:17:01.200393 2175 policy_none.go:49] "None policy: Start" Sep 13 00:17:01.200434 kubelet[2175]: I0913 00:17:01.200426 2175 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:17:01.200588 kubelet[2175]: I0913 00:17:01.200447 2175 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:17:01.209641 kubelet[2175]: I0913 00:17:01.209578 2175 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:17:01.209641 kubelet[2175]: I0913 00:17:01.209635 2175 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:17:01.209868 kubelet[2175]: I0913 00:17:01.209700 2175 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:17:01.209868 kubelet[2175]: I0913 00:17:01.209716 2175 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:17:01.209868 kubelet[2175]: E0913 00:17:01.209789 2175 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:17:01.209913 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:17:01.226353 kubelet[2175]: E0913 00:17:01.210616 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:17:01.230934 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:17:01.234491 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 00:17:01.246662 kubelet[2175]: E0913 00:17:01.246613 2175 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:17:01.246931 kubelet[2175]: I0913 00:17:01.246908 2175 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:17:01.247155 kubelet[2175]: I0913 00:17:01.246940 2175 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:17:01.247328 kubelet[2175]: I0913 00:17:01.247256 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:17:01.248156 kubelet[2175]: E0913 00:17:01.248113 2175 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:17:01.248290 kubelet[2175]: E0913 00:17:01.248189 2175 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:17:01.321658 systemd[1]: Created slice kubepods-burstable-pod3322793d0626e63e0834b407d927d4b3.slice - libcontainer container kubepods-burstable-pod3322793d0626e63e0834b407d927d4b3.slice. Sep 13 00:17:01.333564 kubelet[2175]: E0913 00:17:01.333494 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:01.337098 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 00:17:01.339110 kubelet[2175]: E0913 00:17:01.339063 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:01.342270 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
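The nodeConfig dump earlier carries five hard eviction thresholds, some absolute (memory.available < 100Mi) and some percentage-based (nodefs.available < 10% of capacity), and the eviction manager whose control loop just started evaluates observed signals against them. A sketch of that comparison; the types here are illustrative, not the kubelet's own:

package main

import "fmt"

// threshold mirrors the Quantity/Percentage pair in the logged
// HardEvictionThresholds; exactly one of the two fields is set.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes; 0 when percentage-based
	percent  float64 // fraction of capacity; 0 when quantity-based
}

// exceeded reports whether observed availability is under the limit.
func exceeded(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	fmt.Println(exceeded(mem, 64<<20, 8<<30)) // true: 64Mi < 100Mi
	disk := threshold{signal: "nodefs.available", percent: 0.10}
	fmt.Println(exceeded(disk, 5<<30, 100<<30)) // true: 5Gi < 10% of 100Gi
}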
Sep 13 00:17:01.344362 kubelet[2175]: E0913 00:17:01.344338 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:01.348963 kubelet[2175]: I0913 00:17:01.348912 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:17:01.349426 kubelet[2175]: E0913 00:17:01.349381 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Sep 13 00:17:01.376358 kubelet[2175]: E0913 00:17:01.376281 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms" Sep 13 00:17:01.477018 kubelet[2175]: I0913 00:17:01.476810 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3322793d0626e63e0834b407d927d4b3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3322793d0626e63e0834b407d927d4b3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:01.477018 kubelet[2175]: I0913 00:17:01.476878 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3322793d0626e63e0834b407d927d4b3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3322793d0626e63e0834b407d927d4b3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:01.477018 kubelet[2175]: I0913 00:17:01.476917 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:01.477018 kubelet[2175]: I0913 00:17:01.476944 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:01.477018 kubelet[2175]: I0913 00:17:01.476968 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:01.477282 kubelet[2175]: I0913 00:17:01.477018 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:01.477282 kubelet[2175]: I0913 00:17:01.477047 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:01.477282 kubelet[2175]: I0913 00:17:01.477069 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3322793d0626e63e0834b407d927d4b3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3322793d0626e63e0834b407d927d4b3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:01.477282 kubelet[2175]: I0913 00:17:01.477091 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:01.552084 kubelet[2175]: I0913 00:17:01.552037 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:17:01.552651 kubelet[2175]: E0913 00:17:01.552532 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Sep 13 00:17:01.635263 kubelet[2175]: E0913 00:17:01.635186 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.636174 containerd[1462]: time="2025-09-13T00:17:01.636110676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3322793d0626e63e0834b407d927d4b3,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:01.640437 kubelet[2175]: E0913 00:17:01.640407 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.641076 containerd[1462]: time="2025-09-13T00:17:01.641028824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:01.645373 kubelet[2175]: E0913 00:17:01.645315 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.645931 containerd[1462]: time="2025-09-13T00:17:01.645874845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:01.778091 kubelet[2175]: E0913 00:17:01.778016 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms" Sep 13 00:17:01.956054 kubelet[2175]: I0913 00:17:01.956006 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:17:01.956572 kubelet[2175]: E0913 00:17:01.956498 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" 
Sep 13 00:17:02.031222 kubelet[2175]: E0913 00:17:02.031042 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:17:02.270233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409373902.mount: Deactivated successfully. Sep 13 00:17:02.277014 containerd[1462]: time="2025-09-13T00:17:02.276929972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:02.278918 containerd[1462]: time="2025-09-13T00:17:02.278865662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:17:02.280393 kubelet[2175]: E0913 00:17:02.280324 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:17:02.280493 containerd[1462]: time="2025-09-13T00:17:02.280354932Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:02.281784 containerd[1462]: time="2025-09-13T00:17:02.281667999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:02.283044 containerd[1462]: time="2025-09-13T00:17:02.282694309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:17:02.283890 containerd[1462]: time="2025-09-13T00:17:02.283794240Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:02.285015 containerd[1462]: time="2025-09-13T00:17:02.284973753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:17:02.288872 containerd[1462]: time="2025-09-13T00:17:02.288818402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:02.289985 containerd[1462]: time="2025-09-13T00:17:02.289929143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.93927ms" Sep 13 00:17:02.290890 containerd[1462]: time="2025-09-13T00:17:02.290854633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.637545ms" Sep 13 00:17:02.294014 containerd[1462]: time="2025-09-13T00:17:02.293960547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.85084ms" Sep 13 00:17:02.341343 kubelet[2175]: E0913 00:17:02.341264 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:17:02.547721 update_engine[1450]: I20250913 00:17:02.546619 1450 update_attempter.cc:509] Updating boot flags... Sep 13 00:17:02.550390 containerd[1462]: time="2025-09-13T00:17:02.550010624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:02.550390 containerd[1462]: time="2025-09-13T00:17:02.550105275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:02.550390 containerd[1462]: time="2025-09-13T00:17:02.550126184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:02.551348 containerd[1462]: time="2025-09-13T00:17:02.551134791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:02.554965 containerd[1462]: time="2025-09-13T00:17:02.554652359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:02.554965 containerd[1462]: time="2025-09-13T00:17:02.554734715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:02.554965 containerd[1462]: time="2025-09-13T00:17:02.554750775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:02.554965 containerd[1462]: time="2025-09-13T00:17:02.554877868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:02.560589 containerd[1462]: time="2025-09-13T00:17:02.560171711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:02.560589 containerd[1462]: time="2025-09-13T00:17:02.560250471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:02.560589 containerd[1462]: time="2025-09-13T00:17:02.560264417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:02.560589 containerd[1462]: time="2025-09-13T00:17:02.560435031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:02.580658 kubelet[2175]: E0913 00:17:02.579856 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s" Sep 13 00:17:02.601590 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2288) Sep 13 00:17:02.611700 systemd[1]: Started cri-containerd-924e8a80c92603e0dca14738dee09b35fc39673a0fd8358aff649325f67db7fe.scope - libcontainer container 924e8a80c92603e0dca14738dee09b35fc39673a0fd8358aff649325f67db7fe. Sep 13 00:17:02.660761 systemd[1]: Started cri-containerd-76ede12d082e7391ba86f2c55aa89e3ae43d98e88b5ae273e1dc58161b5c17ff.scope - libcontainer container 76ede12d082e7391ba86f2c55aa89e3ae43d98e88b5ae273e1dc58161b5c17ff. Sep 13 00:17:02.668904 systemd[1]: Started cri-containerd-206badaadb607c530ec37c5acd656b314a19292263f8b4186043d41b654c73a9.scope - libcontainer container 206badaadb607c530ec37c5acd656b314a19292263f8b4186043d41b654c73a9. Sep 13 00:17:02.705613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2288) Sep 13 00:17:02.768334 kubelet[2175]: I0913 00:17:02.768154 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:17:02.769297 kubelet[2175]: E0913 00:17:02.769261 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Sep 13 00:17:02.770944 kubelet[2175]: E0913 00:17:02.770896 2175 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:17:02.804034 containerd[1462]: time="2025-09-13T00:17:02.803771310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"206badaadb607c530ec37c5acd656b314a19292263f8b4186043d41b654c73a9\"" Sep 13 00:17:02.806914 kubelet[2175]: E0913 00:17:02.806838 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:02.808951 containerd[1462]: time="2025-09-13T00:17:02.808856447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3322793d0626e63e0834b407d927d4b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"924e8a80c92603e0dca14738dee09b35fc39673a0fd8358aff649325f67db7fe\"" Sep 13 00:17:02.810074 kubelet[2175]: E0913 00:17:02.809945 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:02.820005 containerd[1462]: time="2025-09-13T00:17:02.819946249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"76ede12d082e7391ba86f2c55aa89e3ae43d98e88b5ae273e1dc58161b5c17ff\"" Sep 13 
00:17:02.820692 kubelet[2175]: E0913 00:17:02.820663 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:02.896209 containerd[1462]: time="2025-09-13T00:17:02.896144903Z" level=info msg="CreateContainer within sandbox \"206badaadb607c530ec37c5acd656b314a19292263f8b4186043d41b654c73a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:17:02.899691 containerd[1462]: time="2025-09-13T00:17:02.899637102Z" level=info msg="CreateContainer within sandbox \"924e8a80c92603e0dca14738dee09b35fc39673a0fd8358aff649325f67db7fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:17:03.071292 containerd[1462]: time="2025-09-13T00:17:03.071129663Z" level=info msg="CreateContainer within sandbox \"76ede12d082e7391ba86f2c55aa89e3ae43d98e88b5ae273e1dc58161b5c17ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:17:03.093986 kubelet[2175]: E0913 00:17:03.093931 2175 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:17:03.098203 containerd[1462]: time="2025-09-13T00:17:03.098151789Z" level=info msg="CreateContainer within sandbox \"206badaadb607c530ec37c5acd656b314a19292263f8b4186043d41b654c73a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"486306c121f0d259faee3176e2368222084fac91d9d5f04f6357aa855d042787\"" Sep 13 00:17:03.099023 containerd[1462]: time="2025-09-13T00:17:03.098981185Z" level=info msg="StartContainer for \"486306c121f0d259faee3176e2368222084fac91d9d5f04f6357aa855d042787\"" Sep 13 00:17:03.149988 systemd[1]: Started cri-containerd-486306c121f0d259faee3176e2368222084fac91d9d5f04f6357aa855d042787.scope - libcontainer container 486306c121f0d259faee3176e2368222084fac91d9d5f04f6357aa855d042787. 
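The dns.go warnings recurring here reflect a fixed kubelet limit: at most three nameservers are written into a pod's resolv.conf, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8" and anything beyond that is reported as omitted. A toy version of the truncation (the fourth address is invented for the example):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// 9.9.9.9 is a made-up fourth entry standing in for whatever the host's
	// resolv.conf carries beyond the three servers the log shows applied.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	const maxNameservers = 3 // the kubelet's per-pod nameserver cap
	if len(nameservers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s)\n", len(nameservers)-maxNameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("nameserver " + strings.Join(nameservers, "\nnameserver "))
}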
Sep 13 00:17:03.344443 containerd[1462]: time="2025-09-13T00:17:03.344285942Z" level=info msg="CreateContainer within sandbox \"76ede12d082e7391ba86f2c55aa89e3ae43d98e88b5ae273e1dc58161b5c17ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abd832f87b12941f77ed6cfce9258fc7826c29c3efab8f4ae566eddc848d9bb4\"" Sep 13 00:17:03.345093 containerd[1462]: time="2025-09-13T00:17:03.344317392Z" level=info msg="CreateContainer within sandbox \"924e8a80c92603e0dca14738dee09b35fc39673a0fd8358aff649325f67db7fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"59ac07a47f93f0cba8767a6a0edfc6d7172b0451fb58d4eb9fe985bde12459d3\"" Sep 13 00:17:03.345093 containerd[1462]: time="2025-09-13T00:17:03.344323904Z" level=info msg="StartContainer for \"486306c121f0d259faee3176e2368222084fac91d9d5f04f6357aa855d042787\" returns successfully" Sep 13 00:17:03.347027 containerd[1462]: time="2025-09-13T00:17:03.345673878Z" level=info msg="StartContainer for \"59ac07a47f93f0cba8767a6a0edfc6d7172b0451fb58d4eb9fe985bde12459d3\"" Sep 13 00:17:03.347027 containerd[1462]: time="2025-09-13T00:17:03.345790189Z" level=info msg="StartContainer for \"abd832f87b12941f77ed6cfce9258fc7826c29c3efab8f4ae566eddc848d9bb4\"" Sep 13 00:17:03.356699 kubelet[2175]: E0913 00:17:03.356648 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:03.357221 kubelet[2175]: E0913 00:17:03.357189 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:03.395462 systemd[1]: run-containerd-runc-k8s.io-abd832f87b12941f77ed6cfce9258fc7826c29c3efab8f4ae566eddc848d9bb4-runc.0JykmV.mount: Deactivated successfully. Sep 13 00:17:03.405727 systemd[1]: Started cri-containerd-abd832f87b12941f77ed6cfce9258fc7826c29c3efab8f4ae566eddc848d9bb4.scope - libcontainer container abd832f87b12941f77ed6cfce9258fc7826c29c3efab8f4ae566eddc848d9bb4. Sep 13 00:17:03.409643 systemd[1]: Started cri-containerd-59ac07a47f93f0cba8767a6a0edfc6d7172b0451fb58d4eb9fe985bde12459d3.scope - libcontainer container 59ac07a47f93f0cba8767a6a0edfc6d7172b0451fb58d4eb9fe985bde12459d3. 
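The entries above trace the CRI sequence for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places a container inside that sandbox, and StartContainer runs it (the "returns successfully" confirmations follow just below). A compact sketch of that call order; the interface is a simplified stand-in for runtime.v1.RuntimeService, not the real gRPC API:

package main

import "fmt"

// runtimeService is a simplified stand-in for the CRI calls visible
// in the log, not the real runtime.v1 RuntimeService.
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime lets the sketch run without a container runtime present.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) {
	return "sandbox-for-" + pod, nil
}
func (fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return name + "@" + sb, nil
}
func (fakeRuntime) StartContainer(id string) error {
	fmt.Println("StartContainer for", id, "returns successfully")
	return nil
}

// launch mirrors the logged order: sandbox, then container, then start.
func launch(rs runtimeService, pod, container string) error {
	sb, err := rs.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	id, err := rs.CreateContainer(sb, container)
	if err != nil {
		return err
	}
	return rs.StartContainer(id)
}

func main() {
	_ = launch(fakeRuntime{}, "kube-scheduler-localhost", "kube-scheduler")
}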
Sep 13 00:17:03.547477 containerd[1462]: time="2025-09-13T00:17:03.547386936Z" level=info msg="StartContainer for \"abd832f87b12941f77ed6cfce9258fc7826c29c3efab8f4ae566eddc848d9bb4\" returns successfully" Sep 13 00:17:03.547656 containerd[1462]: time="2025-09-13T00:17:03.547587307Z" level=info msg="StartContainer for \"59ac07a47f93f0cba8767a6a0edfc6d7172b0451fb58d4eb9fe985bde12459d3\" returns successfully" Sep 13 00:17:04.359581 kubelet[2175]: E0913 00:17:04.359354 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:04.359581 kubelet[2175]: E0913 00:17:04.359479 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:04.360449 kubelet[2175]: E0913 00:17:04.360420 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:04.360526 kubelet[2175]: E0913 00:17:04.360516 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:04.370588 kubelet[2175]: I0913 00:17:04.370439 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:17:05.285787 kubelet[2175]: E0913 00:17:05.285729 2175 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:17:05.365163 kubelet[2175]: E0913 00:17:05.365120 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:05.365701 kubelet[2175]: E0913 00:17:05.365262 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:05.365701 kubelet[2175]: E0913 00:17:05.365279 2175 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:17:05.365701 kubelet[2175]: E0913 00:17:05.365451 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:05.447436 kubelet[2175]: E0913 00:17:05.447281 2175 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864af693fedaabb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:17:01.160303291 +0000 UTC m=+1.173351501,LastTimestamp:2025-09-13 00:17:01.160303291 +0000 UTC m=+1.173351501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:17:05.810829 kubelet[2175]: I0913 00:17:05.810741 2175 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:17:05.855495 kubelet[2175]: E0913 00:17:05.855278 2175 
event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864af6940a8f596 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:17:01.172577686 +0000 UTC m=+1.185625896,LastTimestamp:2025-09-13 00:17:01.172577686 +0000 UTC m=+1.185625896,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:17:05.875477 kubelet[2175]: I0913 00:17:05.875430 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:05.901801 kubelet[2175]: E0913 00:17:05.901754 2175 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:05.901801 kubelet[2175]: I0913 00:17:05.901786 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:05.903484 kubelet[2175]: E0913 00:17:05.903439 2175 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:05.903484 kubelet[2175]: I0913 00:17:05.903472 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:05.904872 kubelet[2175]: E0913 00:17:05.904846 2175 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:06.155128 kubelet[2175]: I0913 00:17:06.154961 2175 apiserver.go:52] "Watching apiserver" Sep 13 00:17:06.175464 kubelet[2175]: I0913 00:17:06.175393 2175 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:17:06.362174 kubelet[2175]: I0913 00:17:06.362121 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:06.362325 kubelet[2175]: I0913 00:17:06.362245 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:06.364018 kubelet[2175]: E0913 00:17:06.363966 2175 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:06.364018 kubelet[2175]: E0913 00:17:06.364004 2175 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:06.364255 kubelet[2175]: E0913 00:17:06.364134 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:06.364255 kubelet[2175]: E0913 00:17:06.364181 2175 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:07.340177 kubelet[2175]: I0913 00:17:07.340128 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:07.364233 kubelet[2175]: I0913 00:17:07.364182 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:07.400914 kubelet[2175]: E0913 00:17:07.400860 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:07.405780 kubelet[2175]: E0913 00:17:07.405716 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:07.441717 kubelet[2175]: I0913 00:17:07.441664 2175 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:07.480220 kubelet[2175]: E0913 00:17:07.480176 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:08.365372 kubelet[2175]: E0913 00:17:08.365322 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:08.365372 kubelet[2175]: E0913 00:17:08.365322 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:08.365973 kubelet[2175]: E0913 00:17:08.365421 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:09.462483 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-9.scope)... Sep 13 00:17:09.462508 systemd[1]: Reloading... Sep 13 00:17:09.576632 zram_generator::config[2516]: No configuration found. Sep 13 00:17:09.722467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:17:09.848042 systemd[1]: Reloading finished in 385 ms. Sep 13 00:17:09.906867 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:17:09.932943 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:17:09.933380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:17:09.933469 systemd[1]: kubelet.service: Consumed 1.502s CPU time, 138.2M memory peak, 0B memory swap peak. Sep 13 00:17:09.942171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:17:10.142449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:17:10.150056 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:17:10.207495 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:17:10.207495 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:17:10.207495 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:17:10.208042 kubelet[2558]: I0913 00:17:10.207507 2558 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:17:10.217178 kubelet[2558]: I0913 00:17:10.217096 2558 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:17:10.217178 kubelet[2558]: I0913 00:17:10.217162 2558 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:17:10.217518 kubelet[2558]: I0913 00:17:10.217440 2558 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:17:10.219604 kubelet[2558]: I0913 00:17:10.219526 2558 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:17:10.222320 kubelet[2558]: I0913 00:17:10.222180 2558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:17:10.226271 kubelet[2558]: E0913 00:17:10.226237 2558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:17:10.226271 kubelet[2558]: I0913 00:17:10.226270 2558 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:17:10.233811 kubelet[2558]: I0913 00:17:10.233750 2558 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:17:10.234099 kubelet[2558]: I0913 00:17:10.234058 2558 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:17:10.234268 kubelet[2558]: I0913 00:17:10.234100 2558 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:17:10.234353 kubelet[2558]: I0913 00:17:10.234285 2558 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:17:10.234353 kubelet[2558]: I0913 00:17:10.234297 2558 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:17:10.234353 kubelet[2558]: I0913 00:17:10.234352 2558 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:17:10.234610 kubelet[2558]: I0913 00:17:10.234596 2558 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:17:10.234664 kubelet[2558]: I0913 00:17:10.234614 2558 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:17:10.234664 kubelet[2558]: I0913 00:17:10.234643 2558 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:17:10.236928 kubelet[2558]: I0913 00:17:10.236864 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:17:10.240111 kubelet[2558]: I0913 00:17:10.240073 2558 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:17:10.240810 kubelet[2558]: I0913 00:17:10.240771 2558 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:17:10.244680 kubelet[2558]: I0913 00:17:10.244651 2558 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:17:10.244788 kubelet[2558]: I0913 00:17:10.244707 2558 server.go:1289] "Started kubelet" Sep 13 00:17:10.246081 kubelet[2558]: I0913 00:17:10.246052 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:17:10.251368 kubelet[2558]: I0913 00:17:10.251324 
2558 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:17:10.252475 kubelet[2558]: I0913 00:17:10.252408 2558 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:17:10.253319 kubelet[2558]: I0913 00:17:10.253272 2558 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:17:10.253865 kubelet[2558]: I0913 00:17:10.253392 2558 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:17:10.253865 kubelet[2558]: I0913 00:17:10.253628 2558 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:17:10.255119 kubelet[2558]: I0913 00:17:10.255030 2558 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:17:10.257298 kubelet[2558]: E0913 00:17:10.257256 2558 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:17:10.259153 kubelet[2558]: I0913 00:17:10.257619 2558 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:17:10.259153 kubelet[2558]: I0913 00:17:10.257668 2558 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:17:10.259153 kubelet[2558]: I0913 00:17:10.257985 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:17:10.259153 kubelet[2558]: I0913 00:17:10.258749 2558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:17:10.259153 kubelet[2558]: I0913 00:17:10.258865 2558 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:17:10.266634 kubelet[2558]: I0913 00:17:10.266307 2558 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:17:10.268256 kubelet[2558]: I0913 00:17:10.267857 2558 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:17:10.268256 kubelet[2558]: I0913 00:17:10.267884 2558 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:17:10.268256 kubelet[2558]: I0913 00:17:10.267912 2558 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
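The restarted kubelet above loads its rotated client credential from kubelet-client-current.pem, a single file carrying both the certificate and the key. Go's standard library can read such a combined PEM by passing the same path for both arguments; a minimal sketch (the path is copied from the certificate_store.go entry, and the combined cert+key layout of that file is an assumption):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// kubelet-client-current.pem is assumed to hold cert and key in one
	// file, so the same path serves as both arguments.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Println("load failed:", err) // expected anywhere but the node itself
		return
	}
	fmt.Println("certificates in chain:", len(cert.Certificate))
}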
Sep 13 00:17:10.268256 kubelet[2558]: I0913 00:17:10.267920 2558 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:17:10.268256 kubelet[2558]: E0913 00:17:10.267976 2558 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:17:10.319668 kubelet[2558]: I0913 00:17:10.319615 2558 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:17:10.319668 kubelet[2558]: I0913 00:17:10.319639 2558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:17:10.319668 kubelet[2558]: I0913 00:17:10.319663 2558 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:17:10.319941 kubelet[2558]: I0913 00:17:10.319846 2558 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:17:10.319941 kubelet[2558]: I0913 00:17:10.319858 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:17:10.319941 kubelet[2558]: I0913 00:17:10.319877 2558 policy_none.go:49] "None policy: Start" Sep 13 00:17:10.319941 kubelet[2558]: I0913 00:17:10.319889 2558 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:17:10.319941 kubelet[2558]: I0913 00:17:10.319901 2558 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:17:10.320074 kubelet[2558]: I0913 00:17:10.320005 2558 state_mem.go:75] "Updated machine memory state" Sep 13 00:17:10.324969 kubelet[2558]: E0913 00:17:10.324868 2558 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:17:10.327852 kubelet[2558]: I0913 00:17:10.325107 2558 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:17:10.327852 kubelet[2558]: I0913 00:17:10.325126 2558 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:17:10.327852 kubelet[2558]: I0913 00:17:10.325535 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:17:10.327852 kubelet[2558]: E0913 00:17:10.327227 2558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:17:10.369689 kubelet[2558]: I0913 00:17:10.369603 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:10.370062 kubelet[2558]: I0913 00:17:10.369676 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.370169 kubelet[2558]: I0913 00:17:10.369776 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:10.379314 kubelet[2558]: E0913 00:17:10.379251 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.379457 kubelet[2558]: E0913 00:17:10.379412 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:10.379521 kubelet[2558]: E0913 00:17:10.379500 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:10.434192 kubelet[2558]: I0913 00:17:10.434035 2558 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:17:10.446147 kubelet[2558]: I0913 00:17:10.446078 2558 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:17:10.446408 kubelet[2558]: I0913 00:17:10.446262 2558 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:17:10.455395 kubelet[2558]: I0913 00:17:10.455328 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3322793d0626e63e0834b407d927d4b3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3322793d0626e63e0834b407d927d4b3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:10.455395 kubelet[2558]: I0913 00:17:10.455387 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.455616 kubelet[2558]: I0913 00:17:10.455428 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3322793d0626e63e0834b407d927d4b3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3322793d0626e63e0834b407d927d4b3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:10.455616 kubelet[2558]: I0913 00:17:10.455453 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3322793d0626e63e0834b407d927d4b3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3322793d0626e63e0834b407d927d4b3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:10.455616 kubelet[2558]: I0913 00:17:10.455482 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.455616 kubelet[2558]: I0913 00:17:10.455503 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.455616 kubelet[2558]: I0913 00:17:10.455522 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.455809 kubelet[2558]: I0913 00:17:10.455560 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:10.455809 kubelet[2558]: I0913 00:17:10.455589 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:10.680391 kubelet[2558]: E0913 00:17:10.680271 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:10.680391 kubelet[2558]: E0913 00:17:10.680325 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:10.680391 kubelet[2558]: E0913 00:17:10.680325 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:11.237870 kubelet[2558]: I0913 00:17:11.237656 2558 apiserver.go:52] "Watching apiserver" Sep 13 00:17:11.254581 kubelet[2558]: I0913 00:17:11.254498 2558 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:17:11.284426 kubelet[2558]: I0913 00:17:11.284380 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:11.284426 kubelet[2558]: E0913 00:17:11.284427 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:11.284772 kubelet[2558]: I0913 00:17:11.284718 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:11.470104 kubelet[2558]: E0913 00:17:11.470042 2558 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:11.470589 kubelet[2558]: E0913 00:17:11.470519 2558 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:11.472436 kubelet[2558]: E0913 00:17:11.471262 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:11.474025 kubelet[2558]: E0913 00:17:11.473897 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:11.502583 kubelet[2558]: I0913 00:17:11.499364 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.499333436 podStartE2EDuration="4.499333436s" podCreationTimestamp="2025-09-13 00:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:11.497486376 +0000 UTC m=+1.340718616" watchObservedRunningTime="2025-09-13 00:17:11.499333436 +0000 UTC m=+1.342565666" Sep 13 00:17:11.506213 kubelet[2558]: I0913 00:17:11.506157 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.506145308 podStartE2EDuration="4.506145308s" podCreationTimestamp="2025-09-13 00:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:11.50577019 +0000 UTC m=+1.349002420" watchObservedRunningTime="2025-09-13 00:17:11.506145308 +0000 UTC m=+1.349377538" Sep 13 00:17:11.537492 kubelet[2558]: I0913 00:17:11.537053 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.537030934 podStartE2EDuration="4.537030934s" podCreationTimestamp="2025-09-13 00:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:11.520735229 +0000 UTC m=+1.363967459" watchObservedRunningTime="2025-09-13 00:17:11.537030934 +0000 UTC m=+1.380263164" Sep 13 00:17:12.285948 kubelet[2558]: E0913 00:17:12.285623 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:12.285948 kubelet[2558]: E0913 00:17:12.285639 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:12.285948 kubelet[2558]: E0913 00:17:12.285821 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:14.056587 kubelet[2558]: I0913 00:17:14.056511 2558 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:17:14.057327 kubelet[2558]: I0913 00:17:14.057273 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:17:14.057368 containerd[1462]: time="2025-09-13T00:17:14.057069523Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:17:14.060002 kubelet[2558]: E0913 00:17:14.059968 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:14.289715 kubelet[2558]: E0913 00:17:14.289665 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:15.939183 kubelet[2558]: E0913 00:17:15.939131 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:16.477569 kubelet[2558]: E0913 00:17:16.477496 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:17.079307 systemd[1]: Created slice kubepods-besteffort-podc5143ece_e31c_49eb_91c0_d10caf9baf30.slice - libcontainer container kubepods-besteffort-podc5143ece_e31c_49eb_91c0_d10caf9baf30.slice. Sep 13 00:17:17.095556 kubelet[2558]: I0913 00:17:17.095510 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5143ece-e31c-49eb-91c0-d10caf9baf30-xtables-lock\") pod \"kube-proxy-59l89\" (UID: \"c5143ece-e31c-49eb-91c0-d10caf9baf30\") " pod="kube-system/kube-proxy-59l89" Sep 13 00:17:17.095864 kubelet[2558]: I0913 00:17:17.095564 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5143ece-e31c-49eb-91c0-d10caf9baf30-kube-proxy\") pod \"kube-proxy-59l89\" (UID: \"c5143ece-e31c-49eb-91c0-d10caf9baf30\") " pod="kube-system/kube-proxy-59l89" Sep 13 00:17:17.095864 kubelet[2558]: I0913 00:17:17.095585 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5143ece-e31c-49eb-91c0-d10caf9baf30-lib-modules\") pod \"kube-proxy-59l89\" (UID: \"c5143ece-e31c-49eb-91c0-d10caf9baf30\") " pod="kube-system/kube-proxy-59l89" Sep 13 00:17:17.095864 kubelet[2558]: I0913 00:17:17.095638 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92l8\" (UniqueName: \"kubernetes.io/projected/c5143ece-e31c-49eb-91c0-d10caf9baf30-kube-api-access-j92l8\") pod \"kube-proxy-59l89\" (UID: \"c5143ece-e31c-49eb-91c0-d10caf9baf30\") " pod="kube-system/kube-proxy-59l89" Sep 13 00:17:17.692028 kubelet[2558]: E0913 00:17:17.691952 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:17.692906 containerd[1462]: time="2025-09-13T00:17:17.692836587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59l89,Uid:c5143ece-e31c-49eb-91c0-d10caf9baf30,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:17.831587 systemd[1]: Created slice kubepods-besteffort-pod0cc848ac_9861_4cb1_91a5_2f48da661934.slice - libcontainer container kubepods-besteffort-pod0cc848ac_9861_4cb1_91a5_2f48da661934.slice. Sep 13 00:17:17.838965 containerd[1462]: time="2025-09-13T00:17:17.837963487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:17.839125 containerd[1462]: time="2025-09-13T00:17:17.839013316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:17.839125 containerd[1462]: time="2025-09-13T00:17:17.839083769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:17.839363 containerd[1462]: time="2025-09-13T00:17:17.839310006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:17.872145 systemd[1]: Started cri-containerd-7a5e645c77102d81d9f8c9088e02992d84f66e9830b90003bbd1371f7e9ffa8e.scope - libcontainer container 7a5e645c77102d81d9f8c9088e02992d84f66e9830b90003bbd1371f7e9ffa8e. Sep 13 00:17:17.899979 kubelet[2558]: I0913 00:17:17.899880 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cc848ac-9861-4cb1-91a5-2f48da661934-var-lib-calico\") pod \"tigera-operator-755d956888-6d986\" (UID: \"0cc848ac-9861-4cb1-91a5-2f48da661934\") " pod="tigera-operator/tigera-operator-755d956888-6d986" Sep 13 00:17:17.899979 kubelet[2558]: I0913 00:17:17.899952 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9vp2\" (UniqueName: \"kubernetes.io/projected/0cc848ac-9861-4cb1-91a5-2f48da661934-kube-api-access-q9vp2\") pod \"tigera-operator-755d956888-6d986\" (UID: \"0cc848ac-9861-4cb1-91a5-2f48da661934\") " pod="tigera-operator/tigera-operator-755d956888-6d986" Sep 13 00:17:17.903353 containerd[1462]: time="2025-09-13T00:17:17.903296797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59l89,Uid:c5143ece-e31c-49eb-91c0-d10caf9baf30,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a5e645c77102d81d9f8c9088e02992d84f66e9830b90003bbd1371f7e9ffa8e\"" Sep 13 00:17:17.904913 kubelet[2558]: E0913 00:17:17.904523 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:18.095006 containerd[1462]: time="2025-09-13T00:17:18.093849763Z" level=info msg="CreateContainer within sandbox \"7a5e645c77102d81d9f8c9088e02992d84f66e9830b90003bbd1371f7e9ffa8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:17:18.137402 containerd[1462]: time="2025-09-13T00:17:18.137351432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6d986,Uid:0cc848ac-9861-4cb1-91a5-2f48da661934,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:17:18.406422 containerd[1462]: time="2025-09-13T00:17:18.405352028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:18.406422 containerd[1462]: time="2025-09-13T00:17:18.406225654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:18.406422 containerd[1462]: time="2025-09-13T00:17:18.406241665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:18.406422 containerd[1462]: time="2025-09-13T00:17:18.406351662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:18.432908 systemd[1]: Started cri-containerd-515546de279dfef45a84d97c91bfbab0ca7f6f537eb667ea70219dba9ab168a8.scope - libcontainer container 515546de279dfef45a84d97c91bfbab0ca7f6f537eb667ea70219dba9ab168a8. Sep 13 00:17:18.472935 containerd[1462]: time="2025-09-13T00:17:18.472777351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6d986,Uid:0cc848ac-9861-4cb1-91a5-2f48da661934,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"515546de279dfef45a84d97c91bfbab0ca7f6f537eb667ea70219dba9ab168a8\"" Sep 13 00:17:18.474658 containerd[1462]: time="2025-09-13T00:17:18.474607420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:17:18.538049 kubelet[2558]: E0913 00:17:18.537992 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:18.599794 containerd[1462]: time="2025-09-13T00:17:18.599509758Z" level=info msg="CreateContainer within sandbox \"7a5e645c77102d81d9f8c9088e02992d84f66e9830b90003bbd1371f7e9ffa8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b38d3e79b88de747e92f0e768b3fbac8f74caa68b14b35983b4376d1d4acd78\"" Sep 13 00:17:18.600470 containerd[1462]: time="2025-09-13T00:17:18.600419923Z" level=info msg="StartContainer for \"0b38d3e79b88de747e92f0e768b3fbac8f74caa68b14b35983b4376d1d4acd78\"" Sep 13 00:17:18.634819 systemd[1]: Started cri-containerd-0b38d3e79b88de747e92f0e768b3fbac8f74caa68b14b35983b4376d1d4acd78.scope - libcontainer container 0b38d3e79b88de747e92f0e768b3fbac8f74caa68b14b35983b4376d1d4acd78. 
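The preceding block shows the CRI container lifecycle for kube-proxy-59l89: RunPodSandbox returns a sandbox id (7a5e645c…), CreateContainer places the kube-proxy container inside it, StartContainer runs it, and systemd tracks the result as a transient cri-containerd-&lt;id&gt;.scope unit. A condensed sketch of the same three calls; the socket path and the kube-proxy image tag are assumptions, since the log never names the image:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Sandbox metadata as logged for kube-proxy-59l89.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-59l89",
			Namespace: "kube-system",
			Uid:       "c5143ece-e31c-49eb-91c0-d10caf9baf30",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId, // "7a5e645c7710..." in the log
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Placeholder image reference; not taken from this log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// systemd then reports the running container as a transient
	// "cri-containerd-<id>.scope" unit, as seen above.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```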
Sep 13 00:17:18.795169 containerd[1462]: time="2025-09-13T00:17:18.795086471Z" level=info msg="StartContainer for \"0b38d3e79b88de747e92f0e768b3fbac8f74caa68b14b35983b4376d1d4acd78\" returns successfully" Sep 13 00:17:19.301064 kubelet[2558]: E0913 00:17:19.301007 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:19.302270 kubelet[2558]: E0913 00:17:19.302192 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:19.527224 kubelet[2558]: I0913 00:17:19.527144 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59l89" podStartSLOduration=4.527123892 podStartE2EDuration="4.527123892s" podCreationTimestamp="2025-09-13 00:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:19.526514935 +0000 UTC m=+9.369747155" watchObservedRunningTime="2025-09-13 00:17:19.527123892 +0000 UTC m=+9.370356122" Sep 13 00:17:20.304999 kubelet[2558]: E0913 00:17:20.304670 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:20.305471 kubelet[2558]: E0913 00:17:20.304997 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:22.671410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771004483.mount: Deactivated successfully. 
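The dns.go:153 error that recurs throughout this log is the kubelet capping a pod's resolv.conf at the glibc resolver limit of three nameserver entries: the host resolv.conf evidently lists at least four, so the kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) and re-logs the warning on every pod sync. A sketch of the truncation behavior under that assumption, not the kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// The glibc resolver only reads the first three "nameserver" lines
// (MAXNS = 3), and the kubelet enforces the same cap when it builds a
// pod's resolv.conf.
const maxNameservers = 3

// truncateNameservers keeps the first three entries and reports whether
// any were dropped -- the condition behind the "Nameserver limits
// exceeded" events above.
func truncateNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Assumed host resolv.conf contents, one nameserver too many.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if kept, omitted := truncateNameservers(host); omitted {
		fmt.Printf("applied nameserver line is: %s\n", strings.Join(kept, " "))
	}
}
```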
Sep 13 00:17:23.173303 containerd[1462]: time="2025-09-13T00:17:23.173239637Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:23.174274 containerd[1462]: time="2025-09-13T00:17:23.174225241Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:17:23.176604 containerd[1462]: time="2025-09-13T00:17:23.175944126Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:23.178239 containerd[1462]: time="2025-09-13T00:17:23.178197167Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:23.179046 containerd[1462]: time="2025-09-13T00:17:23.179004005Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.704355428s" Sep 13 00:17:23.179113 containerd[1462]: time="2025-09-13T00:17:23.179046334Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:17:23.187919 containerd[1462]: time="2025-09-13T00:17:23.187864590Z" level=info msg="CreateContainer within sandbox \"515546de279dfef45a84d97c91bfbab0ca7f6f537eb667ea70219dba9ab168a8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:17:23.208832 containerd[1462]: time="2025-09-13T00:17:23.208768531Z" level=info msg="CreateContainer within sandbox \"515546de279dfef45a84d97c91bfbab0ca7f6f537eb667ea70219dba9ab168a8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"38a70f1eb3862a6741f23716fb628461216ab50bd8737f86246d7959e3cd727c\"" Sep 13 00:17:23.210113 containerd[1462]: time="2025-09-13T00:17:23.209924967Z" level=info msg="StartContainer for \"38a70f1eb3862a6741f23716fb628461216ab50bd8737f86246d7959e3cd727c\"" Sep 13 00:17:23.243836 systemd[1]: Started cri-containerd-38a70f1eb3862a6741f23716fb628461216ab50bd8737f86246d7959e3cd727c.scope - libcontainer container 38a70f1eb3862a6741f23716fb628461216ab50bd8737f86246d7959e3cd727c. Sep 13 00:17:23.289161 containerd[1462]: time="2025-09-13T00:17:23.288961582Z" level=info msg="StartContainer for \"38a70f1eb3862a6741f23716fb628461216ab50bd8737f86246d7959e3cd727c\" returns successfully" Sep 13 00:17:28.037391 sudo[1653]: pam_unix(sudo:session): session closed for user root Sep 13 00:17:28.042331 sshd[1650]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:28.048706 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:43718.service: Deactivated successfully. Sep 13 00:17:28.052664 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:17:28.054818 systemd[1]: session-9.scope: Consumed 6.353s CPU time, 160.3M memory peak, 0B memory swap peak. Sep 13 00:17:28.056779 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:17:28.058533 systemd-logind[1446]: Removed session 9. 
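For scale, the tigera/operator pull above moved 25,062,609 bytes in 4.704355428 s (the reported image size, 25,058,604 bytes, differs only by manifest overhead), roughly 5 MiB/s:

```go
package main

import "fmt"

// Back-of-the-envelope throughput for the tigera/operator pull logged above.
func main() {
	const bytesRead = 25062609.0 // "bytes read" reported by containerd
	const seconds = 4.704355428  // pull duration from the log
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~5.1 MiB/s
}
```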
Sep 13 00:17:32.044667 kubelet[2558]: I0913 00:17:32.043025 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6d986" podStartSLOduration=10.335781278 podStartE2EDuration="15.04300246s" podCreationTimestamp="2025-09-13 00:17:17 +0000 UTC" firstStartedPulling="2025-09-13 00:17:18.474185054 +0000 UTC m=+8.317417284" lastFinishedPulling="2025-09-13 00:17:23.181406236 +0000 UTC m=+13.024638466" observedRunningTime="2025-09-13 00:17:23.323348177 +0000 UTC m=+13.166580407" watchObservedRunningTime="2025-09-13 00:17:32.04300246 +0000 UTC m=+21.886234690" Sep 13 00:17:32.062503 systemd[1]: Created slice kubepods-besteffort-pode0b9b364_e5b7_49c6_9a64_ff630dbca91b.slice - libcontainer container kubepods-besteffort-pode0b9b364_e5b7_49c6_9a64_ff630dbca91b.slice. Sep 13 00:17:32.098827 kubelet[2558]: I0913 00:17:32.098614 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b9b364-e5b7-49c6-9a64-ff630dbca91b-tigera-ca-bundle\") pod \"calico-typha-58494cc8cd-6846c\" (UID: \"e0b9b364-e5b7-49c6-9a64-ff630dbca91b\") " pod="calico-system/calico-typha-58494cc8cd-6846c" Sep 13 00:17:32.098827 kubelet[2558]: I0913 00:17:32.098739 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0b9b364-e5b7-49c6-9a64-ff630dbca91b-typha-certs\") pod \"calico-typha-58494cc8cd-6846c\" (UID: \"e0b9b364-e5b7-49c6-9a64-ff630dbca91b\") " pod="calico-system/calico-typha-58494cc8cd-6846c" Sep 13 00:17:32.098827 kubelet[2558]: I0913 00:17:32.098767 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6wh\" (UniqueName: \"kubernetes.io/projected/e0b9b364-e5b7-49c6-9a64-ff630dbca91b-kube-api-access-jh6wh\") pod \"calico-typha-58494cc8cd-6846c\" (UID: \"e0b9b364-e5b7-49c6-9a64-ff630dbca91b\") " pod="calico-system/calico-typha-58494cc8cd-6846c" Sep 13 00:17:32.146051 systemd[1]: Created slice kubepods-besteffort-podbb3082a8_43c5_461c_b5e0_fe6a4ee01a64.slice - libcontainer container kubepods-besteffort-podbb3082a8_43c5_461c_b5e0_fe6a4ee01a64.slice. 
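The tigera-operator startup entry above also shows how pod_startup_latency_tracker discounts image pulls: podStartSLOduration is the end-to-end latency (observed running time minus podCreationTimestamp) minus the pull window (lastFinishedPulling minus firstStartedPulling). The logged numbers reproduce exactly:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the tigera-operator startup latencies above:
// SLO latency = end-to-end latency minus the image-pull window.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-13 00:17:17 +0000 UTC")
	firstPull := parse("2025-09-13 00:17:18.474185054 +0000 UTC")
	lastPull := parse("2025-09-13 00:17:23.181406236 +0000 UTC")
	running := parse("2025-09-13 00:17:32.04300246 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e) // 15.04300246s  (podStartE2EDuration)
	fmt.Println(slo) // 10.335781278s (podStartSLOduration)
}
```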
Sep 13 00:17:32.199194 kubelet[2558]: I0913 00:17:32.199128 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-var-run-calico\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.199194 kubelet[2558]: I0913 00:17:32.199188 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-lib-modules\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.199437 kubelet[2558]: I0913 00:17:32.199215 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-policysync\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.199437 kubelet[2558]: I0913 00:17:32.199293 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-cni-net-dir\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.199437 kubelet[2558]: I0913 00:17:32.199337 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-cni-bin-dir\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.199631 kubelet[2558]: I0913 00:17:32.199597 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-node-certs\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.200088 kubelet[2558]: I0913 00:17:32.199641 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-flexvol-driver-host\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.200088 kubelet[2558]: I0913 00:17:32.199668 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-var-lib-calico\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.200088 kubelet[2558]: I0913 00:17:32.199690 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz4xm\" (UniqueName: \"kubernetes.io/projected/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-kube-api-access-cz4xm\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.200088 kubelet[2558]: I0913 00:17:32.199714 2558 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-cni-log-dir\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.200088 kubelet[2558]: I0913 00:17:32.199750 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-xtables-lock\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.200230 kubelet[2558]: I0913 00:17:32.199768 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb3082a8-43c5-461c-b5e0-fe6a4ee01a64-tigera-ca-bundle\") pod \"calico-node-f8rcf\" (UID: \"bb3082a8-43c5-461c-b5e0-fe6a4ee01a64\") " pod="calico-system/calico-node-f8rcf" Sep 13 00:17:32.252452 kubelet[2558]: E0913 00:17:32.252358 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:32.302340 kubelet[2558]: I0913 00:17:32.300742 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bee5869-7316-4315-890e-b413da2035a5-kubelet-dir\") pod \"csi-node-driver-cnh9g\" (UID: \"0bee5869-7316-4315-890e-b413da2035a5\") " pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:32.302340 kubelet[2558]: I0913 00:17:32.300832 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfv9\" (UniqueName: \"kubernetes.io/projected/0bee5869-7316-4315-890e-b413da2035a5-kube-api-access-9bfv9\") pod \"csi-node-driver-cnh9g\" (UID: \"0bee5869-7316-4315-890e-b413da2035a5\") " pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:32.302340 kubelet[2558]: I0913 00:17:32.300879 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bee5869-7316-4315-890e-b413da2035a5-socket-dir\") pod \"csi-node-driver-cnh9g\" (UID: \"0bee5869-7316-4315-890e-b413da2035a5\") " pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:32.302340 kubelet[2558]: I0913 00:17:32.300907 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0bee5869-7316-4315-890e-b413da2035a5-varrun\") pod \"csi-node-driver-cnh9g\" (UID: \"0bee5869-7316-4315-890e-b413da2035a5\") " pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:32.302340 kubelet[2558]: I0913 00:17:32.301006 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bee5869-7316-4315-890e-b413da2035a5-registration-dir\") pod \"csi-node-driver-cnh9g\" (UID: \"0bee5869-7316-4315-890e-b413da2035a5\") " pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:32.318123 kubelet[2558]: E0913 00:17:32.318070 2558 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.318123 kubelet[2558]: W0913 00:17:32.318118 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.318598 kubelet[2558]: E0913 00:17:32.318167 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.370165 kubelet[2558]: E0913 00:17:32.370109 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:32.371012 containerd[1462]: time="2025-09-13T00:17:32.370969089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58494cc8cd-6846c,Uid:e0b9b364-e5b7-49c6-9a64-ff630dbca91b,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:32.404277 kubelet[2558]: E0913 00:17:32.403661 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.404277 kubelet[2558]: W0913 00:17:32.403780 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.404277 kubelet[2558]: E0913 00:17:32.403809 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.406821 kubelet[2558]: E0913 00:17:32.406797 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.406821 kubelet[2558]: W0913 00:17:32.406817 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.406821 kubelet[2558]: E0913 00:17:32.406834 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.407520 kubelet[2558]: E0913 00:17:32.407307 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.407520 kubelet[2558]: W0913 00:17:32.407334 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.407520 kubelet[2558]: E0913 00:17:32.407362 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:32.407777 kubelet[2558]: E0913 00:17:32.407763 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.407846 kubelet[2558]: W0913 00:17:32.407835 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.407982 kubelet[2558]: E0913 00:17:32.407909 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.408273 kubelet[2558]: E0913 00:17:32.408261 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.408355 kubelet[2558]: W0913 00:17:32.408343 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.408561 kubelet[2558]: E0913 00:17:32.408421 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.408833 kubelet[2558]: E0913 00:17:32.408806 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.408922 kubelet[2558]: W0913 00:17:32.408910 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.409080 kubelet[2558]: E0913 00:17:32.408970 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.409277 kubelet[2558]: E0913 00:17:32.409259 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.409398 kubelet[2558]: W0913 00:17:32.409362 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.409398 kubelet[2558]: E0913 00:17:32.409381 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.409912 kubelet[2558]: E0913 00:17:32.409811 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.409912 kubelet[2558]: W0913 00:17:32.409822 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.409912 kubelet[2558]: E0913 00:17:32.409832 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:32.410143 kubelet[2558]: E0913 00:17:32.410108 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.410143 kubelet[2558]: W0913 00:17:32.410119 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.410143 kubelet[2558]: E0913 00:17:32.410130 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.410715 kubelet[2558]: E0913 00:17:32.410586 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.410715 kubelet[2558]: W0913 00:17:32.410601 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.410715 kubelet[2558]: E0913 00:17:32.410617 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.411141 kubelet[2558]: E0913 00:17:32.411043 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.411141 kubelet[2558]: W0913 00:17:32.411055 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.411141 kubelet[2558]: E0913 00:17:32.411065 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.411802 kubelet[2558]: E0913 00:17:32.411679 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.411802 kubelet[2558]: W0913 00:17:32.411730 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.411802 kubelet[2558]: E0913 00:17:32.411741 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.412265 kubelet[2558]: E0913 00:17:32.412252 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.412382 kubelet[2558]: W0913 00:17:32.412330 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.412382 kubelet[2558]: E0913 00:17:32.412345 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:32.413059 kubelet[2558]: E0913 00:17:32.412969 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.413059 kubelet[2558]: W0913 00:17:32.412985 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.413059 kubelet[2558]: E0913 00:17:32.413039 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.414063 kubelet[2558]: E0913 00:17:32.414005 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.414063 kubelet[2558]: W0913 00:17:32.414028 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.414374 kubelet[2558]: E0913 00:17:32.414215 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.415184 kubelet[2558]: E0913 00:17:32.415168 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.415292 kubelet[2558]: W0913 00:17:32.415277 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.415476 kubelet[2558]: E0913 00:17:32.415453 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.416143 kubelet[2558]: E0913 00:17:32.416079 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.416143 kubelet[2558]: W0913 00:17:32.416093 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.416143 kubelet[2558]: E0913 00:17:32.416106 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.416861 kubelet[2558]: E0913 00:17:32.416797 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.416861 kubelet[2558]: W0913 00:17:32.416811 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.416861 kubelet[2558]: E0913 00:17:32.416823 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:32.420536 kubelet[2558]: E0913 00:17:32.418687 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.420536 kubelet[2558]: W0913 00:17:32.418704 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.420536 kubelet[2558]: E0913 00:17:32.418717 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.421112 kubelet[2558]: E0913 00:17:32.421094 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.421584 kubelet[2558]: W0913 00:17:32.421196 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.421692 kubelet[2558]: E0913 00:17:32.421673 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.422479 kubelet[2558]: E0913 00:17:32.422459 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.422613 kubelet[2558]: W0913 00:17:32.422593 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.422748 kubelet[2558]: E0913 00:17:32.422729 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.423691 kubelet[2558]: E0913 00:17:32.423620 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.423775 kubelet[2558]: W0913 00:17:32.423757 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.423855 kubelet[2558]: E0913 00:17:32.423838 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.426847 kubelet[2558]: E0913 00:17:32.426716 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.426847 kubelet[2558]: W0913 00:17:32.426742 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.426847 kubelet[2558]: E0913 00:17:32.426764 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:32.428961 containerd[1462]: time="2025-09-13T00:17:32.428637091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:32.429056 containerd[1462]: time="2025-09-13T00:17:32.429011063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:32.430612 containerd[1462]: time="2025-09-13T00:17:32.429080924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:32.430612 containerd[1462]: time="2025-09-13T00:17:32.429863425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:32.430728 kubelet[2558]: E0913 00:17:32.429931 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.430728 kubelet[2558]: W0913 00:17:32.429954 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.430728 kubelet[2558]: E0913 00:17:32.429980 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.430853 kubelet[2558]: E0913 00:17:32.430821 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.430853 kubelet[2558]: W0913 00:17:32.430834 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.430853 kubelet[2558]: E0913 00:17:32.430848 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.432567 kubelet[2558]: E0913 00:17:32.431640 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:32.432567 kubelet[2558]: W0913 00:17:32.431658 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:32.432567 kubelet[2558]: E0913 00:17:32.431672 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:32.453457 containerd[1462]: time="2025-09-13T00:17:32.452971076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f8rcf,Uid:bb3082a8-43c5-461c-b5e0-fe6a4ee01a64,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:32.466880 systemd[1]: Started cri-containerd-46ada34be2571ca33824519ec912bbdced956f79db432d7b286fec5f657a0689.scope - libcontainer container 46ada34be2571ca33824519ec912bbdced956f79db432d7b286fec5f657a0689. Sep 13 00:17:32.492270 containerd[1462]: time="2025-09-13T00:17:32.492130008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:32.493214 containerd[1462]: time="2025-09-13T00:17:32.492310537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:32.493269 containerd[1462]: time="2025-09-13T00:17:32.493141769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:32.493776 containerd[1462]: time="2025-09-13T00:17:32.493404583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:32.519023 systemd[1]: Started cri-containerd-2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7.scope - libcontainer container 2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7. Sep 13 00:17:32.525381 containerd[1462]: time="2025-09-13T00:17:32.525334792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58494cc8cd-6846c,Uid:e0b9b364-e5b7-49c6-9a64-ff630dbca91b,Namespace:calico-system,Attempt:0,} returns sandbox id \"46ada34be2571ca33824519ec912bbdced956f79db432d7b286fec5f657a0689\"" Sep 13 00:17:32.526622 kubelet[2558]: E0913 00:17:32.526586 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:32.528250 containerd[1462]: time="2025-09-13T00:17:32.527994018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:17:32.548267 containerd[1462]: time="2025-09-13T00:17:32.548171172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f8rcf,Uid:bb3082a8-43c5-461c-b5e0-fe6a4ee01a64,Namespace:calico-system,Attempt:0,} returns sandbox id \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\"" Sep 13 00:17:34.269678 kubelet[2558]: E0913 00:17:34.269281 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:35.076243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390871272.mount: Deactivated successfully. 
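The long run of driver-call.go / plugins.go errors above (it resumes below) is a single failure reported three ways per probe: the kubelet's FlexVolume prober finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to exec its uds driver with init, the binary does not exist yet, and unmarshalling the resulting empty output fails. The noise repeats on every plugin-directory rescan until calico-node's flexvol init container (the pod2daemon-flexvol image pulled below) installs the binary. The JSON error itself is exact Go behavior, reproducible with a stand-in status struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is a stand-in for the JSON status a FlexVolume driver
// must print; the real kubelet struct has more fields.
type DriverStatus struct {
	Status string `json:"status"`
}

func main() {
	var st DriverStatus
	// The driver binary was never executed, so its "output" is empty,
	// and decoding an empty document fails exactly as in the log.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}
```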
Sep 13 00:17:35.648368 containerd[1462]: time="2025-09-13T00:17:35.648289047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:35.649587 containerd[1462]: time="2025-09-13T00:17:35.649523426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:17:35.651462 containerd[1462]: time="2025-09-13T00:17:35.651432042Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:35.654563 containerd[1462]: time="2025-09-13T00:17:35.654508690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:35.655583 containerd[1462]: time="2025-09-13T00:17:35.655519830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.127489973s" Sep 13 00:17:35.655668 containerd[1462]: time="2025-09-13T00:17:35.655589461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:17:35.657114 containerd[1462]: time="2025-09-13T00:17:35.656936902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:17:35.684853 containerd[1462]: time="2025-09-13T00:17:35.684796384Z" level=info msg="CreateContainer within sandbox \"46ada34be2571ca33824519ec912bbdced956f79db432d7b286fec5f657a0689\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:17:35.704225 containerd[1462]: time="2025-09-13T00:17:35.704137464Z" level=info msg="CreateContainer within sandbox \"46ada34be2571ca33824519ec912bbdced956f79db432d7b286fec5f657a0689\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"82ec38e90b0701ab72bbbe89df71eccda8d05a507d1b1799abe45aef80416fb3\"" Sep 13 00:17:35.704783 containerd[1462]: time="2025-09-13T00:17:35.704755545Z" level=info msg="StartContainer for \"82ec38e90b0701ab72bbbe89df71eccda8d05a507d1b1799abe45aef80416fb3\"" Sep 13 00:17:35.741777 systemd[1]: Started cri-containerd-82ec38e90b0701ab72bbbe89df71eccda8d05a507d1b1799abe45aef80416fb3.scope - libcontainer container 82ec38e90b0701ab72bbbe89df71eccda8d05a507d1b1799abe45aef80416fb3. 
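Each successful pull above ends with three names for the same image: a floating repo tag (ghcr.io/flatcar/calico/typha:v3.30.3), an immutable repo digest (the SHA-256 of the manifest as served by the registry), and an image id (the digest of the image config blob). The addressing is plain content hashing; a toy illustration with a placeholder manifest, not the real calico/typha one:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// An OCI object is named by the SHA-256 of its bytes, so the digest both
// identifies and verifies the content. Placeholder manifest bytes:
func main() {
	manifest := []byte(`{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json"}`)
	fmt.Printf("sha256:%x\n", sha256.Sum256(manifest))
}
```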
Sep 13 00:17:35.791995 containerd[1462]: time="2025-09-13T00:17:35.791942287Z" level=info msg="StartContainer for \"82ec38e90b0701ab72bbbe89df71eccda8d05a507d1b1799abe45aef80416fb3\" returns successfully" Sep 13 00:17:36.269535 kubelet[2558]: E0913 00:17:36.269409 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:36.341501 kubelet[2558]: E0913 00:17:36.341458 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:36.414424 kubelet[2558]: E0913 00:17:36.414369 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:36.414424 kubelet[2558]: W0913 00:17:36.414397 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:36.414424 kubelet[2558]: E0913 00:17:36.414421 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:36.414792 kubelet[2558]: E0913 00:17:36.414776 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:36.414792 kubelet[2558]: W0913 00:17:36.414788 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:36.414866 kubelet[2558]: E0913 00:17:36.414802 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:36.415111 kubelet[2558]: E0913 00:17:36.415093 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:36.415111 kubelet[2558]: W0913 00:17:36.415104 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:36.415111 kubelet[2558]: E0913 00:17:36.415112 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:36.415409 kubelet[2558]: E0913 00:17:36.415393 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:36.415409 kubelet[2558]: W0913 00:17:36.415405 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:36.415479 kubelet[2558]: E0913 00:17:36.415416 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same three-entry FlexVolume init failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats with only timestamps changing through Sep 13 00:17:36.441 ...] Sep 13 00:17:36.441179 kubelet[2558]: E0913 00:17:36.441108 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:37.341200 kubelet[2558]: I0913 00:17:37.341126 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:37.341784 kubelet[2558]: E0913 00:17:37.341637 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:37.398812 containerd[1462]: time="2025-09-13T00:17:37.398746743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:37.399711 containerd[1462]: time="2025-09-13T00:17:37.399675277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:17:37.401205 containerd[1462]: time="2025-09-13T00:17:37.401139137Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:37.403675 containerd[1462]: time="2025-09-13T00:17:37.403624825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:37.404321 containerd[1462]: time="2025-09-13T00:17:37.404250319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.747284142s" Sep 13 00:17:37.404321 containerd[1462]: time="2025-09-13T00:17:37.404302717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:17:37.410847 containerd[1462]: time="2025-09-13T00:17:37.410794029Z" level=info msg="CreateContainer within sandbox \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:17:37.424877 kubelet[2558]: E0913 00:17:37.424712 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.424877 kubelet[2558]: W0913 00:17:37.424738 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.424877 kubelet[2558]: E0913 00:17:37.424763 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same three-entry FlexVolume init failure repeats with only timestamps changing through Sep 13 00:17:37.428 ...] Sep 13 00:17:37.428254 kubelet[2558]: E0913 00:17:37.428252 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:37.428425 containerd[1462]: time="2025-09-13T00:17:37.428243229Z" level=info msg="CreateContainer within sandbox \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db\"" Sep 13 00:17:37.428725 containerd[1462]: time="2025-09-13T00:17:37.428699276Z" level=info msg="StartContainer for \"9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db\"" Sep 13 00:17:37.446682 kubelet[2558]: E0913 00:17:37.446648 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.446682 kubelet[2558]: W0913 00:17:37.446673 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.446849 kubelet[2558]: E0913 00:17:37.446697 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.446993 kubelet[2558]: E0913 00:17:37.446978 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.446993 kubelet[2558]: W0913 00:17:37.446989 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.447049 kubelet[2558]: E0913 00:17:37.446998 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.447260 kubelet[2558]: E0913 00:17:37.447245 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.447260 kubelet[2558]: W0913 00:17:37.447257 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.447325 kubelet[2558]: E0913 00:17:37.447268 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.447719 kubelet[2558]: E0913 00:17:37.447685 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.447719 kubelet[2558]: W0913 00:17:37.447711 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.447786 kubelet[2558]: E0913 00:17:37.447735 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same three-entry FlexVolume init failure repeats with only timestamps changing through Sep 13 00:17:37.451 ...] Sep 13 00:17:37.451613 kubelet[2558]: E0913 00:17:37.451577 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:37.452150 kubelet[2558]: E0913 00:17:37.452126 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.452150 kubelet[2558]: W0913 00:17:37.452144 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.452228 kubelet[2558]: E0913 00:17:37.452155 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.452440 kubelet[2558]: E0913 00:17:37.452417 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.452440 kubelet[2558]: W0913 00:17:37.452435 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.452497 kubelet[2558]: E0913 00:17:37.452446 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.454717 kubelet[2558]: E0913 00:17:37.452859 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.454717 kubelet[2558]: W0913 00:17:37.452880 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.454717 kubelet[2558]: E0913 00:17:37.452892 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.454717 kubelet[2558]: E0913 00:17:37.453177 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:37.454717 kubelet[2558]: W0913 00:17:37.453189 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:37.454717 kubelet[2558]: E0913 00:17:37.453199 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:37.466684 systemd[1]: Started cri-containerd-9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db.scope - libcontainer container 9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db. Sep 13 00:17:37.512303 systemd[1]: cri-containerd-9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db.scope: Deactivated successfully. Sep 13 00:17:37.851874 containerd[1462]: time="2025-09-13T00:17:37.851752807Z" level=info msg="StartContainer for \"9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db\" returns successfully" Sep 13 00:17:37.875766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db-rootfs.mount: Deactivated successfully. 
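
[Editor's note] The wall of driver-call.go entries above is one failure logged three ways: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, finds the nodeagent~uds directory but no runnable uds binary, gets zero bytes of output, and then fails to decode that output as the JSON status a driver must print. A minimal sketch reproducing the sequence — paths taken from the log; the driverStatus shape follows the FlexVolume convention:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is the JSON a FlexVolume driver is expected to print on
// stdout, e.g. {"status":"Success"} in response to the init call.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Same invocation the kubelet logs: <plugin-dir>/nodeagent~uds/uds init
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).CombinedOutput()
	if err != nil {
		// With the binary missing the call fails (the kubelet's exec wrapper
		// reports "executable file not found in $PATH") and out stays empty,
		// matching the W-level driver-call.go:149 entries.
		fmt.Println("driver call failed:", err)
	}

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal on zero bytes returns "unexpected end of JSON input",
		// exactly the E-level driver-call.go:262 entries above.
		fmt.Println("failed to unmarshal output:", err)
	}
}
```
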
Sep 13 00:17:38.268920 kubelet[2558]: E0913 00:17:38.268853 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:38.375755 containerd[1462]: time="2025-09-13T00:17:38.368475320Z" level=info msg="shim disconnected" id=9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db namespace=k8s.io Sep 13 00:17:38.375755 containerd[1462]: time="2025-09-13T00:17:38.375733439Z" level=warning msg="cleaning up after shim disconnected" id=9ea95137a17b46f24bf3886ec946a23272651e478a128e4d934039007bfe45db namespace=k8s.io Sep 13 00:17:38.375755 containerd[1462]: time="2025-09-13T00:17:38.375762605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:17:38.390402 kubelet[2558]: I0913 00:17:38.390314 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58494cc8cd-6846c" podStartSLOduration=3.261150903 podStartE2EDuration="6.390290065s" podCreationTimestamp="2025-09-13 00:17:32 +0000 UTC" firstStartedPulling="2025-09-13 00:17:32.527647187 +0000 UTC m=+22.370879417" lastFinishedPulling="2025-09-13 00:17:35.656786349 +0000 UTC m=+25.500018579" observedRunningTime="2025-09-13 00:17:36.352343372 +0000 UTC m=+26.195575602" watchObservedRunningTime="2025-09-13 00:17:38.390290065 +0000 UTC m=+28.233522305" Sep 13 00:17:39.368210 containerd[1462]: time="2025-09-13T00:17:39.368149150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:17:40.271581 kubelet[2558]: E0913 00:17:40.269228 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:42.627537 kubelet[2558]: E0913 00:17:42.627480 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:43.876388 containerd[1462]: time="2025-09-13T00:17:43.876303978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.877600 containerd[1462]: time="2025-09-13T00:17:43.877489193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:17:43.879396 containerd[1462]: time="2025-09-13T00:17:43.879291095Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.882307 containerd[1462]: time="2025-09-13T00:17:43.882239530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.883046 containerd[1462]: time="2025-09-13T00:17:43.882983046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.514788059s" Sep 13 00:17:43.883046 containerd[1462]: time="2025-09-13T00:17:43.883036746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:17:43.890806 containerd[1462]: time="2025-09-13T00:17:43.890737011Z" level=info msg="CreateContainer within sandbox \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:17:43.912931 containerd[1462]: time="2025-09-13T00:17:43.912879858Z" level=info msg="CreateContainer within sandbox \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92\"" Sep 13 00:17:43.913578 containerd[1462]: time="2025-09-13T00:17:43.913529627Z" level=info msg="StartContainer for \"257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92\"" Sep 13 00:17:43.952722 systemd[1]: Started cri-containerd-257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92.scope - libcontainer container 257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92. Sep 13 00:17:44.010167 containerd[1462]: time="2025-09-13T00:17:44.010101890Z" level=info msg="StartContainer for \"257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92\" returns successfully" Sep 13 00:17:44.269021 kubelet[2558]: E0913 00:17:44.268937 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:45.374529 systemd[1]: cri-containerd-257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92.scope: Deactivated successfully. Sep 13 00:17:45.399440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92-rootfs.mount: Deactivated successfully. 
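
[Editor's note] The pod_startup_latency_tracker entry a few lines above reports podStartE2EDuration=6.390290065s but podStartSLOduration=3.261150903s; the difference is exactly the image-pull window, which the startup SLO excludes. Re-deriving it from the logged monotonic offsets:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Monotonic offsets (the m=+... values) from the tracker entry.
	firstStartedPulling := 22.370879417
	lastFinishedPulling := 25.500018579
	e2e := 6.390290065 // podStartE2EDuration in seconds

	pull := lastFinishedPulling - firstStartedPulling // 3.129139162s of pulling
	slo := e2e - pull                                 // pull time excluded from the SLO
	fmt.Println(time.Duration(slo * float64(time.Second)))
	// Prints ~3.261150903s, matching podStartSLOduration=3.261150903.
}
```
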
Sep 13 00:17:45.480576 kubelet[2558]: I0913 00:17:45.477733 2558 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:17:46.097315 containerd[1462]: time="2025-09-13T00:17:46.097160596Z" level=info msg="shim disconnected" id=257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92 namespace=k8s.io Sep 13 00:17:46.097315 containerd[1462]: time="2025-09-13T00:17:46.097238923Z" level=warning msg="cleaning up after shim disconnected" id=257d25bb91dff2aba5cc1a8087cbf526712ea08da5282b01b375aac4a70f9d92 namespace=k8s.io Sep 13 00:17:46.097315 containerd[1462]: time="2025-09-13T00:17:46.097250775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:17:46.191918 kubelet[2558]: E0913 00:17:46.191857 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:46.235108 kubelet[2558]: I0913 00:17:46.235026 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-ca-bundle\") pod \"whisker-59c5768889-w4vfq\" (UID: \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\") " pod="calico-system/whisker-59c5768889-w4vfq" Sep 13 00:17:46.235348 kubelet[2558]: I0913 00:17:46.235122 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-backend-key-pair\") pod \"whisker-59c5768889-w4vfq\" (UID: \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\") " pod="calico-system/whisker-59c5768889-w4vfq" Sep 13 00:17:46.235348 kubelet[2558]: I0913 00:17:46.235178 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrzs\" (UniqueName: \"kubernetes.io/projected/0cfe0c86-7016-4d90-9905-4eeb1e03db85-kube-api-access-7xrzs\") pod \"whisker-59c5768889-w4vfq\" (UID: \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\") " pod="calico-system/whisker-59c5768889-w4vfq" Sep 13 00:17:46.280133 systemd[1]: Created slice kubepods-besteffort-pod0cfe0c86_7016_4d90_9905_4eeb1e03db85.slice - libcontainer container kubepods-besteffort-pod0cfe0c86_7016_4d90_9905_4eeb1e03db85.slice. Sep 13 00:17:46.300023 systemd[1]: Created slice kubepods-burstable-pod631d57a2_dd3c_4c24_8d55_9feb2884e566.slice - libcontainer container kubepods-burstable-pod631d57a2_dd3c_4c24_8d55_9feb2884e566.slice. Sep 13 00:17:46.314687 systemd[1]: Created slice kubepods-besteffort-pod0bee5869_7316_4315_890e_b413da2035a5.slice - libcontainer container kubepods-besteffort-pod0bee5869_7316_4315_890e_b413da2035a5.slice. Sep 13 00:17:46.321594 containerd[1462]: time="2025-09-13T00:17:46.319944450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnh9g,Uid:0bee5869-7316-4315-890e-b413da2035a5,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:46.325047 systemd[1]: Created slice kubepods-besteffort-pod1f424473_224d_4e51_9ca9_c442b0bc325d.slice - libcontainer container kubepods-besteffort-pod1f424473_224d_4e51_9ca9_c442b0bc325d.slice. 
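
[Editor's note] The recurring dns.go:153 warnings come from the resolver's three-nameserver ceiling: the node's resolv.conf lists more servers than can be applied, so the first three (1.1.1.1 1.0.0.1 8.8.8.8) are kept and the rest dropped. A sketch of that truncation; the fourth server in the example input is hypothetical, standing in for whatever extra entry the node actually has:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the classic glibc MAXNS limit the kubelet enforces

// applyNameserverLimit keeps the first three servers and warns in the same
// shape as the kubelet's dns.go:153 message.
func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
			"the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// The first three entries are the ones the log shows being retained; the
	// fourth (TEST-NET address) is a stand-in for the omitted server.
	applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"})
}
```
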
Sep 13 00:17:46.338828 kubelet[2558]: I0913 00:17:46.336272 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ck4d\" (UniqueName: \"kubernetes.io/projected/9994ca67-7eed-4733-95f7-6dbed4d7c37b-kube-api-access-2ck4d\") pod \"coredns-674b8bbfcf-n4ctb\" (UID: \"9994ca67-7eed-4733-95f7-6dbed4d7c37b\") " pod="kube-system/coredns-674b8bbfcf-n4ctb" Sep 13 00:17:46.338828 kubelet[2558]: I0913 00:17:46.336332 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/631d57a2-dd3c-4c24-8d55-9feb2884e566-config-volume\") pod \"coredns-674b8bbfcf-bg67g\" (UID: \"631d57a2-dd3c-4c24-8d55-9feb2884e566\") " pod="kube-system/coredns-674b8bbfcf-bg67g" Sep 13 00:17:46.338828 kubelet[2558]: I0913 00:17:46.336359 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9994ca67-7eed-4733-95f7-6dbed4d7c37b-config-volume\") pod \"coredns-674b8bbfcf-n4ctb\" (UID: \"9994ca67-7eed-4733-95f7-6dbed4d7c37b\") " pod="kube-system/coredns-674b8bbfcf-n4ctb" Sep 13 00:17:46.338828 kubelet[2558]: I0913 00:17:46.336399 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42dd6ade-572a-4087-84d6-79c32851c332-calico-apiserver-certs\") pod \"calico-apiserver-5567664f8d-nl824\" (UID: \"42dd6ade-572a-4087-84d6-79c32851c332\") " pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" Sep 13 00:17:46.338828 kubelet[2558]: I0913 00:17:46.336441 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt4l\" (UniqueName: \"kubernetes.io/projected/1f424473-224d-4e51-9ca9-c442b0bc325d-kube-api-access-kwt4l\") pod \"goldmane-54d579b49d-dgk4p\" (UID: \"1f424473-224d-4e51-9ca9-c442b0bc325d\") " pod="calico-system/goldmane-54d579b49d-dgk4p" Sep 13 00:17:46.339153 kubelet[2558]: I0913 00:17:46.336480 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncz99\" (UniqueName: \"kubernetes.io/projected/4bc675b9-f8ab-4324-abb1-fe64dccc6391-kube-api-access-ncz99\") pod \"calico-kube-controllers-5d8667ffb7-b5pnn\" (UID: \"4bc675b9-f8ab-4324-abb1-fe64dccc6391\") " pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" Sep 13 00:17:46.339153 kubelet[2558]: I0913 00:17:46.336508 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4z6c\" (UniqueName: \"kubernetes.io/projected/631d57a2-dd3c-4c24-8d55-9feb2884e566-kube-api-access-k4z6c\") pod \"coredns-674b8bbfcf-bg67g\" (UID: \"631d57a2-dd3c-4c24-8d55-9feb2884e566\") " pod="kube-system/coredns-674b8bbfcf-bg67g" Sep 13 00:17:46.339153 kubelet[2558]: I0913 00:17:46.336529 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1f424473-224d-4e51-9ca9-c442b0bc325d-goldmane-key-pair\") pod \"goldmane-54d579b49d-dgk4p\" (UID: \"1f424473-224d-4e51-9ca9-c442b0bc325d\") " pod="calico-system/goldmane-54d579b49d-dgk4p" Sep 13 00:17:46.339153 kubelet[2558]: I0913 00:17:46.336596 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4bc675b9-f8ab-4324-abb1-fe64dccc6391-tigera-ca-bundle\") pod \"calico-kube-controllers-5d8667ffb7-b5pnn\" (UID: \"4bc675b9-f8ab-4324-abb1-fe64dccc6391\") " pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" Sep 13 00:17:46.339153 kubelet[2558]: I0913 00:17:46.336732 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt22\" (UniqueName: \"kubernetes.io/projected/42dd6ade-572a-4087-84d6-79c32851c332-kube-api-access-mzt22\") pod \"calico-apiserver-5567664f8d-nl824\" (UID: \"42dd6ade-572a-4087-84d6-79c32851c332\") " pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" Sep 13 00:17:46.339344 kubelet[2558]: I0913 00:17:46.336781 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f424473-224d-4e51-9ca9-c442b0bc325d-config\") pod \"goldmane-54d579b49d-dgk4p\" (UID: \"1f424473-224d-4e51-9ca9-c442b0bc325d\") " pod="calico-system/goldmane-54d579b49d-dgk4p" Sep 13 00:17:46.339344 kubelet[2558]: I0913 00:17:46.336805 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f424473-224d-4e51-9ca9-c442b0bc325d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-dgk4p\" (UID: \"1f424473-224d-4e51-9ca9-c442b0bc325d\") " pod="calico-system/goldmane-54d579b49d-dgk4p" Sep 13 00:17:46.339344 kubelet[2558]: I0913 00:17:46.336830 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6748f3df-f396-46d5-b632-65b0d3fe85e1-calico-apiserver-certs\") pod \"calico-apiserver-5567664f8d-svmbp\" (UID: \"6748f3df-f396-46d5-b632-65b0d3fe85e1\") " pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" Sep 13 00:17:46.339344 kubelet[2558]: I0913 00:17:46.336861 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkmw\" (UniqueName: \"kubernetes.io/projected/6748f3df-f396-46d5-b632-65b0d3fe85e1-kube-api-access-qtkmw\") pod \"calico-apiserver-5567664f8d-svmbp\" (UID: \"6748f3df-f396-46d5-b632-65b0d3fe85e1\") " pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" Sep 13 00:17:46.340660 systemd[1]: Created slice kubepods-besteffort-pod42dd6ade_572a_4087_84d6_79c32851c332.slice - libcontainer container kubepods-besteffort-pod42dd6ade_572a_4087_84d6_79c32851c332.slice. Sep 13 00:17:46.361013 systemd[1]: Created slice kubepods-besteffort-pod4bc675b9_f8ab_4324_abb1_fe64dccc6391.slice - libcontainer container kubepods-besteffort-pod4bc675b9_f8ab_4324_abb1_fe64dccc6391.slice. Sep 13 00:17:46.367396 systemd[1]: Created slice kubepods-burstable-pod9994ca67_7eed_4733_95f7_6dbed4d7c37b.slice - libcontainer container kubepods-burstable-pod9994ca67_7eed_4733_95f7_6dbed4d7c37b.slice. Sep 13 00:17:46.372144 systemd[1]: Created slice kubepods-besteffort-pod6748f3df_f396_46d5_b632_65b0d3fe85e1.slice - libcontainer container kubepods-besteffort-pod6748f3df_f396_46d5_b632_65b0d3fe85e1.slice. 
Sep 13 00:17:46.551196 containerd[1462]: time="2025-09-13T00:17:46.551105025Z" level=error msg="Failed to destroy network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:46.556060 containerd[1462]: time="2025-09-13T00:17:46.555961970Z" level=error msg="encountered an error cleaning up failed sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:46.556060 containerd[1462]: time="2025-09-13T00:17:46.556066095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnh9g,Uid:0bee5869-7316-4315-890e-b413da2035a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:46.556495 kubelet[2558]: E0913 00:17:46.556441 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:46.556964 kubelet[2558]: E0913 00:17:46.556592 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:46.556964 kubelet[2558]: E0913 00:17:46.556631 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cnh9g" Sep 13 00:17:46.556964 kubelet[2558]: E0913 00:17:46.556710 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cnh9g_calico-system(0bee5869-7316-4315-890e-b413da2035a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cnh9g_calico-system(0bee5869-7316-4315-890e-b413da2035a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cnh9g" 
podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:46.590925 containerd[1462]: time="2025-09-13T00:17:46.590841775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c5768889-w4vfq,Uid:0cfe0c86-7016-4d90-9905-4eeb1e03db85,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:46.610533 kubelet[2558]: E0913 00:17:46.610442 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:46.611660 containerd[1462]: time="2025-09-13T00:17:46.611306634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg67g,Uid:631d57a2-dd3c-4c24-8d55-9feb2884e566,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:46.631728 containerd[1462]: time="2025-09-13T00:17:46.631648664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dgk4p,Uid:1f424473-224d-4e51-9ca9-c442b0bc325d,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:46.633878 kubelet[2558]: I0913 00:17:46.633844 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:17:46.634641 containerd[1462]: time="2025-09-13T00:17:46.634604591Z" level=info msg="StopPodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\"" Sep 13 00:17:46.638128 containerd[1462]: time="2025-09-13T00:17:46.638104901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:17:46.646986 containerd[1462]: time="2025-09-13T00:17:46.646884249Z" level=info msg="Ensure that sandbox 5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64 in task-service has been cleanup successfully" Sep 13 00:17:46.652907 containerd[1462]: time="2025-09-13T00:17:46.652851007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-nl824,Uid:42dd6ade-572a-4087-84d6-79c32851c332,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:46.665795 containerd[1462]: time="2025-09-13T00:17:46.665734087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8667ffb7-b5pnn,Uid:4bc675b9-f8ab-4324-abb1-fe64dccc6391,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:46.672585 kubelet[2558]: E0913 00:17:46.670687 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:46.672757 containerd[1462]: time="2025-09-13T00:17:46.671322706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4ctb,Uid:9994ca67-7eed-4733-95f7-6dbed4d7c37b,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:46.682733 containerd[1462]: time="2025-09-13T00:17:46.682642662Z" level=error msg="StopPodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" failed" error="failed to destroy network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:46.683088 kubelet[2558]: E0913 00:17:46.683022 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:17:46.683192 kubelet[2558]: E0913 00:17:46.683119 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64"} Sep 13 00:17:46.683232 kubelet[2558]: E0913 00:17:46.683202 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bee5869-7316-4315-890e-b413da2035a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:46.683313 kubelet[2558]: E0913 00:17:46.683240 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0bee5869-7316-4315-890e-b413da2035a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cnh9g" podUID="0bee5869-7316-4315-890e-b413da2035a5" Sep 13 00:17:46.686958 containerd[1462]: time="2025-09-13T00:17:46.686908037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-svmbp,Uid:6748f3df-f396-46d5-b632-65b0d3fe85e1,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:47.081050 containerd[1462]: time="2025-09-13T00:17:47.080840806Z" level=error msg="Failed to destroy network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.081886 containerd[1462]: time="2025-09-13T00:17:47.081743159Z" level=error msg="encountered an error cleaning up failed sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.081886 containerd[1462]: time="2025-09-13T00:17:47.081815304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c5768889-w4vfq,Uid:0cfe0c86-7016-4d90-9905-4eeb1e03db85,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.082535 kubelet[2558]: E0913 00:17:47.082477 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.082767 kubelet[2558]: E0913 00:17:47.082744 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c5768889-w4vfq" Sep 13 00:17:47.082962 kubelet[2558]: E0913 00:17:47.082876 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c5768889-w4vfq" Sep 13 00:17:47.083297 kubelet[2558]: E0913 00:17:47.083059 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59c5768889-w4vfq_calico-system(0cfe0c86-7016-4d90-9905-4eeb1e03db85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59c5768889-w4vfq_calico-system(0cfe0c86-7016-4d90-9905-4eeb1e03db85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c5768889-w4vfq" podUID="0cfe0c86-7016-4d90-9905-4eeb1e03db85" Sep 13 00:17:47.100062 containerd[1462]: time="2025-09-13T00:17:47.099988091Z" level=error msg="Failed to destroy network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.101376 containerd[1462]: time="2025-09-13T00:17:47.101329659Z" level=error msg="encountered an error cleaning up failed sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.101445 containerd[1462]: time="2025-09-13T00:17:47.101402666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-svmbp,Uid:6748f3df-f396-46d5-b632-65b0d3fe85e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.103491 kubelet[2558]: E0913 00:17:47.103437 2558 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.103615 kubelet[2558]: E0913 00:17:47.103521 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" Sep 13 00:17:47.103658 kubelet[2558]: E0913 00:17:47.103612 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" Sep 13 00:17:47.103724 kubelet[2558]: E0913 00:17:47.103687 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5567664f8d-svmbp_calico-apiserver(6748f3df-f396-46d5-b632-65b0d3fe85e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5567664f8d-svmbp_calico-apiserver(6748f3df-f396-46d5-b632-65b0d3fe85e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" podUID="6748f3df-f396-46d5-b632-65b0d3fe85e1" Sep 13 00:17:47.132350 containerd[1462]: time="2025-09-13T00:17:47.132278782Z" level=error msg="Failed to destroy network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.132772 containerd[1462]: time="2025-09-13T00:17:47.132741451Z" level=error msg="encountered an error cleaning up failed sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.132860 containerd[1462]: time="2025-09-13T00:17:47.132797877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dgk4p,Uid:1f424473-224d-4e51-9ca9-c442b0bc325d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 13 00:17:47.133131 kubelet[2558]: E0913 00:17:47.133076 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.133220 kubelet[2558]: E0913 00:17:47.133154 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-dgk4p" Sep 13 00:17:47.133220 kubelet[2558]: E0913 00:17:47.133178 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-dgk4p" Sep 13 00:17:47.133322 kubelet[2558]: E0913 00:17:47.133230 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-dgk4p_calico-system(1f424473-224d-4e51-9ca9-c442b0bc325d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-dgk4p_calico-system(1f424473-224d-4e51-9ca9-c442b0bc325d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-dgk4p" podUID="1f424473-224d-4e51-9ca9-c442b0bc325d" Sep 13 00:17:47.134917 containerd[1462]: time="2025-09-13T00:17:47.134842965Z" level=error msg="Failed to destroy network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.135435 containerd[1462]: time="2025-09-13T00:17:47.135391795Z" level=error msg="encountered an error cleaning up failed sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.135490 containerd[1462]: time="2025-09-13T00:17:47.135461135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg67g,Uid:631d57a2-dd3c-4c24-8d55-9feb2884e566,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.135769 kubelet[2558]: E0913 00:17:47.135742 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.135843 kubelet[2558]: E0913 00:17:47.135774 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bg67g" Sep 13 00:17:47.135843 kubelet[2558]: E0913 00:17:47.135793 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bg67g" Sep 13 00:17:47.135843 kubelet[2558]: E0913 00:17:47.135832 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bg67g_kube-system(631d57a2-dd3c-4c24-8d55-9feb2884e566)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bg67g_kube-system(631d57a2-dd3c-4c24-8d55-9feb2884e566)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bg67g" podUID="631d57a2-dd3c-4c24-8d55-9feb2884e566" Sep 13 00:17:47.167261 containerd[1462]: time="2025-09-13T00:17:47.167174854Z" level=error msg="Failed to destroy network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.167701 containerd[1462]: time="2025-09-13T00:17:47.167651769Z" level=error msg="encountered an error cleaning up failed sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.167757 containerd[1462]: time="2025-09-13T00:17:47.167724756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-nl824,Uid:42dd6ade-572a-4087-84d6-79c32851c332,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.168095 kubelet[2558]: E0913 00:17:47.168048 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.168159 kubelet[2558]: E0913 00:17:47.168119 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" Sep 13 00:17:47.168159 kubelet[2558]: E0913 00:17:47.168143 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" Sep 13 00:17:47.168219 kubelet[2558]: E0913 00:17:47.168197 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5567664f8d-nl824_calico-apiserver(42dd6ade-572a-4087-84d6-79c32851c332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5567664f8d-nl824_calico-apiserver(42dd6ade-572a-4087-84d6-79c32851c332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" podUID="42dd6ade-572a-4087-84d6-79c32851c332" Sep 13 00:17:47.265774 containerd[1462]: time="2025-09-13T00:17:47.265711394Z" level=error msg="Failed to destroy network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.266187 containerd[1462]: time="2025-09-13T00:17:47.266151751Z" level=error msg="encountered an error cleaning up failed sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.266251 containerd[1462]: time="2025-09-13T00:17:47.266205662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8667ffb7-b5pnn,Uid:4bc675b9-f8ab-4324-abb1-fe64dccc6391,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.266563 kubelet[2558]: E0913 00:17:47.266487 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.266724 kubelet[2558]: E0913 00:17:47.266587 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" Sep 13 00:17:47.266724 kubelet[2558]: E0913 00:17:47.266614 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" Sep 13 00:17:47.266724 kubelet[2558]: E0913 00:17:47.266679 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d8667ffb7-b5pnn_calico-system(4bc675b9-f8ab-4324-abb1-fe64dccc6391)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d8667ffb7-b5pnn_calico-system(4bc675b9-f8ab-4324-abb1-fe64dccc6391)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" podUID="4bc675b9-f8ab-4324-abb1-fe64dccc6391" Sep 13 00:17:47.283889 containerd[1462]: time="2025-09-13T00:17:47.283808979Z" level=error msg="Failed to destroy network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.284426 containerd[1462]: time="2025-09-13T00:17:47.284373939Z" level=error msg="encountered an error cleaning up failed sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.284607 containerd[1462]: time="2025-09-13T00:17:47.284445865Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-n4ctb,Uid:9994ca67-7eed-4733-95f7-6dbed4d7c37b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.284873 kubelet[2558]: E0913 00:17:47.284815 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.284953 kubelet[2558]: E0913 00:17:47.284908 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n4ctb" Sep 13 00:17:47.284953 kubelet[2558]: E0913 00:17:47.284940 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n4ctb" Sep 13 00:17:47.285045 kubelet[2558]: E0913 00:17:47.285013 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n4ctb_kube-system(9994ca67-7eed-4733-95f7-6dbed4d7c37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n4ctb_kube-system(9994ca67-7eed-4733-95f7-6dbed4d7c37b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n4ctb" podUID="9994ca67-7eed-4733-95f7-6dbed4d7c37b" Sep 13 00:17:47.455508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64-shm.mount: Deactivated successfully. 
Sep 13 00:17:47.638642 kubelet[2558]: I0913 00:17:47.638602 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:17:47.639272 containerd[1462]: time="2025-09-13T00:17:47.639224560Z" level=info msg="StopPodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\"" Sep 13 00:17:47.639604 containerd[1462]: time="2025-09-13T00:17:47.639576801Z" level=info msg="Ensure that sandbox d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584 in task-service has been cleanup successfully" Sep 13 00:17:47.639720 kubelet[2558]: I0913 00:17:47.639699 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:17:47.641234 containerd[1462]: time="2025-09-13T00:17:47.640158072Z" level=info msg="StopPodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\"" Sep 13 00:17:47.641234 containerd[1462]: time="2025-09-13T00:17:47.640331137Z" level=info msg="Ensure that sandbox 796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae in task-service has been cleanup successfully" Sep 13 00:17:47.641751 kubelet[2558]: I0913 00:17:47.641714 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:17:47.643067 kubelet[2558]: I0913 00:17:47.643042 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:17:47.643142 containerd[1462]: time="2025-09-13T00:17:47.643095805Z" level=info msg="StopPodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\"" Sep 13 00:17:47.644570 containerd[1462]: time="2025-09-13T00:17:47.643283197Z" level=info msg="Ensure that sandbox 70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd in task-service has been cleanup successfully" Sep 13 00:17:47.644570 containerd[1462]: time="2025-09-13T00:17:47.643669833Z" level=info msg="StopPodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\"" Sep 13 00:17:47.644570 containerd[1462]: time="2025-09-13T00:17:47.643844911Z" level=info msg="Ensure that sandbox 9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce in task-service has been cleanup successfully" Sep 13 00:17:47.647200 kubelet[2558]: I0913 00:17:47.647167 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:17:47.647892 containerd[1462]: time="2025-09-13T00:17:47.647833467Z" level=info msg="StopPodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\"" Sep 13 00:17:47.648099 containerd[1462]: time="2025-09-13T00:17:47.648076523Z" level=info msg="Ensure that sandbox 312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50 in task-service has been cleanup successfully" Sep 13 00:17:47.653279 kubelet[2558]: I0913 00:17:47.653222 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:17:47.654231 containerd[1462]: time="2025-09-13T00:17:47.654071635Z" level=info msg="StopPodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\"" Sep 13 00:17:47.654892 
containerd[1462]: time="2025-09-13T00:17:47.654616056Z" level=info msg="Ensure that sandbox b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2 in task-service has been cleanup successfully" Sep 13 00:17:47.655460 kubelet[2558]: I0913 00:17:47.655082 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:17:47.655681 containerd[1462]: time="2025-09-13T00:17:47.655656379Z" level=info msg="StopPodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\"" Sep 13 00:17:47.655971 containerd[1462]: time="2025-09-13T00:17:47.655947385Z" level=info msg="Ensure that sandbox aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5 in task-service has been cleanup successfully" Sep 13 00:17:47.702404 containerd[1462]: time="2025-09-13T00:17:47.702327149Z" level=error msg="StopPodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" failed" error="failed to destroy network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.703513 kubelet[2558]: E0913 00:17:47.702646 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:17:47.703513 kubelet[2558]: E0913 00:17:47.702707 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae"} Sep 13 00:17:47.703513 kubelet[2558]: E0913 00:17:47.702754 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42dd6ade-572a-4087-84d6-79c32851c332\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.703513 kubelet[2558]: E0913 00:17:47.702784 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42dd6ade-572a-4087-84d6-79c32851c332\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" podUID="42dd6ade-572a-4087-84d6-79c32851c332" Sep 13 00:17:47.705155 containerd[1462]: time="2025-09-13T00:17:47.705114300Z" level=error msg="StopPodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" failed" error="failed to destroy network for sandbox 
\"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.705362 kubelet[2558]: E0913 00:17:47.705318 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:17:47.705429 kubelet[2558]: E0913 00:17:47.705371 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584"} Sep 13 00:17:47.705429 kubelet[2558]: E0913 00:17:47.705404 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f424473-224d-4e51-9ca9-c442b0bc325d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.705526 kubelet[2558]: E0913 00:17:47.705433 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f424473-224d-4e51-9ca9-c442b0bc325d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-dgk4p" podUID="1f424473-224d-4e51-9ca9-c442b0bc325d" Sep 13 00:17:47.706072 containerd[1462]: time="2025-09-13T00:17:47.705940120Z" level=error msg="StopPodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" failed" error="failed to destroy network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.706125 kubelet[2558]: E0913 00:17:47.706101 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:17:47.706176 kubelet[2558]: E0913 00:17:47.706127 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5"} Sep 13 00:17:47.706176 kubelet[2558]: E0913 00:17:47.706149 2558 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9994ca67-7eed-4733-95f7-6dbed4d7c37b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.706259 kubelet[2558]: E0913 00:17:47.706175 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9994ca67-7eed-4733-95f7-6dbed4d7c37b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n4ctb" podUID="9994ca67-7eed-4733-95f7-6dbed4d7c37b" Sep 13 00:17:47.710538 containerd[1462]: time="2025-09-13T00:17:47.710483917Z" level=error msg="StopPodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" failed" error="failed to destroy network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.712717 kubelet[2558]: E0913 00:17:47.712671 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:17:47.712805 kubelet[2558]: E0913 00:17:47.712757 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd"} Sep 13 00:17:47.714611 kubelet[2558]: E0913 00:17:47.712792 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6748f3df-f396-46d5-b632-65b0d3fe85e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.714611 kubelet[2558]: E0913 00:17:47.712846 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6748f3df-f396-46d5-b632-65b0d3fe85e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" 
podUID="6748f3df-f396-46d5-b632-65b0d3fe85e1" Sep 13 00:17:47.719199 containerd[1462]: time="2025-09-13T00:17:47.719145913Z" level=error msg="StopPodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" failed" error="failed to destroy network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.719615 kubelet[2558]: E0913 00:17:47.719436 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:17:47.719615 kubelet[2558]: E0913 00:17:47.719497 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce"} Sep 13 00:17:47.719615 kubelet[2558]: E0913 00:17:47.719526 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"631d57a2-dd3c-4c24-8d55-9feb2884e566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.719615 kubelet[2558]: E0913 00:17:47.719586 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"631d57a2-dd3c-4c24-8d55-9feb2884e566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bg67g" podUID="631d57a2-dd3c-4c24-8d55-9feb2884e566" Sep 13 00:17:47.722137 containerd[1462]: time="2025-09-13T00:17:47.722047949Z" level=error msg="StopPodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" failed" error="failed to destroy network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.722229 containerd[1462]: time="2025-09-13T00:17:47.722130766Z" level=error msg="StopPodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" failed" error="failed to destroy network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.722288 kubelet[2558]: E0913 00:17:47.722204 2558 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:17:47.722340 kubelet[2558]: E0913 00:17:47.722291 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2"} Sep 13 00:17:47.722340 kubelet[2558]: E0913 00:17:47.722313 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:17:47.722403 kubelet[2558]: E0913 00:17:47.722346 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50"} Sep 13 00:17:47.722403 kubelet[2558]: E0913 00:17:47.722369 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bc675b9-f8ab-4324-abb1-fe64dccc6391\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.722403 kubelet[2558]: E0913 00:17:47.722387 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bc675b9-f8ab-4324-abb1-fe64dccc6391\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" podUID="4bc675b9-f8ab-4324-abb1-fe64dccc6391" Sep 13 00:17:47.722403 kubelet[2558]: E0913 00:17:47.722320 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:47.722560 kubelet[2558]: E0913 00:17:47.722413 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c5768889-w4vfq" podUID="0cfe0c86-7016-4d90-9905-4eeb1e03db85" Sep 13 00:17:54.804235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923449247.mount: Deactivated successfully. Sep 13 00:17:56.608799 containerd[1462]: time="2025-09-13T00:17:56.608697422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:56.665489 containerd[1462]: time="2025-09-13T00:17:56.665399135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:17:56.670204 containerd[1462]: time="2025-09-13T00:17:56.670126077Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:56.675367 containerd[1462]: time="2025-09-13T00:17:56.675305987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:56.675863 containerd[1462]: time="2025-09-13T00:17:56.675837016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.036211221s" Sep 13 00:17:56.675935 containerd[1462]: time="2025-09-13T00:17:56.675868557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:17:56.695311 containerd[1462]: time="2025-09-13T00:17:56.695256406Z" level=info msg="CreateContainer within sandbox \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:17:57.164122 containerd[1462]: time="2025-09-13T00:17:57.164021928Z" level=info msg="CreateContainer within sandbox \"2927d5e29359a76eda4646e276878f1d3380d078ac71132880bf5ac1d11b0ca7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8231e66b72303595275f9fdb3f332bcf7c0712fa98fd439d1e5190abea164523\"" Sep 13 00:17:57.165138 containerd[1462]: time="2025-09-13T00:17:57.165066721Z" level=info msg="StartContainer for \"8231e66b72303595275f9fdb3f332bcf7c0712fa98fd439d1e5190abea164523\"" Sep 13 00:17:57.227813 systemd[1]: Started cri-containerd-8231e66b72303595275f9fdb3f332bcf7c0712fa98fd439d1e5190abea164523.scope - libcontainer container 8231e66b72303595275f9fdb3f332bcf7c0712fa98fd439d1e5190abea164523. Sep 13 00:17:57.273526 containerd[1462]: time="2025-09-13T00:17:57.273471148Z" level=info msg="StartContainer for \"8231e66b72303595275f9fdb3f332bcf7c0712fa98fd439d1e5190abea164523\" returns successfully" Sep 13 00:17:57.375700 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:17:57.376322 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:17:57.518707 containerd[1462]: time="2025-09-13T00:17:57.518640361Z" level=info msg="StopPodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\"" Sep 13 00:17:57.701983 kubelet[2558]: I0913 00:17:57.701019 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f8rcf" podStartSLOduration=1.5742881469999999 podStartE2EDuration="25.700995605s" podCreationTimestamp="2025-09-13 00:17:32 +0000 UTC" firstStartedPulling="2025-09-13 00:17:32.550058469 +0000 UTC m=+22.393290709" lastFinishedPulling="2025-09-13 00:17:56.676765937 +0000 UTC m=+46.519998167" observedRunningTime="2025-09-13 00:17:57.699112359 +0000 UTC m=+47.542344589" watchObservedRunningTime="2025-09-13 00:17:57.700995605 +0000 UTC m=+47.544227835" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.600 [INFO][3844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.601 [INFO][3844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" iface="eth0" netns="/var/run/netns/cni-599640f0-b829-c506-6625-9447b62cbdda" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.601 [INFO][3844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" iface="eth0" netns="/var/run/netns/cni-599640f0-b829-c506-6625-9447b62cbdda" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.602 [INFO][3844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" iface="eth0" netns="/var/run/netns/cni-599640f0-b829-c506-6625-9447b62cbdda" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.602 [INFO][3844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.602 [INFO][3844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.680 [INFO][3853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.683 [INFO][3853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.684 [INFO][3853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.694 [WARNING][3853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.694 [INFO][3853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.699 [INFO][3853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:57.706725 containerd[1462]: 2025-09-13 00:17:57.703 [INFO][3844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:17:57.708283 containerd[1462]: time="2025-09-13T00:17:57.707664510Z" level=info msg="TearDown network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" successfully" Sep 13 00:17:57.708283 containerd[1462]: time="2025-09-13T00:17:57.707736800Z" level=info msg="StopPodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" returns successfully" Sep 13 00:17:57.709924 systemd[1]: run-netns-cni\x2d599640f0\x2db829\x2dc506\x2d6625\x2d9447b62cbdda.mount: Deactivated successfully. Sep 13 00:17:57.817332 kubelet[2558]: I0913 00:17:57.815229 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-backend-key-pair\") pod \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\" (UID: \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\") " Sep 13 00:17:57.817332 kubelet[2558]: I0913 00:17:57.815302 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrzs\" (UniqueName: \"kubernetes.io/projected/0cfe0c86-7016-4d90-9905-4eeb1e03db85-kube-api-access-7xrzs\") pod \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\" (UID: \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\") " Sep 13 00:17:57.817332 kubelet[2558]: I0913 00:17:57.815353 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-ca-bundle\") pod \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\" (UID: \"0cfe0c86-7016-4d90-9905-4eeb1e03db85\") " Sep 13 00:17:57.817332 kubelet[2558]: I0913 00:17:57.815972 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0cfe0c86-7016-4d90-9905-4eeb1e03db85" (UID: "0cfe0c86-7016-4d90-9905-4eeb1e03db85"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:17:57.823605 kubelet[2558]: I0913 00:17:57.822604 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfe0c86-7016-4d90-9905-4eeb1e03db85-kube-api-access-7xrzs" (OuterVolumeSpecName: "kube-api-access-7xrzs") pod "0cfe0c86-7016-4d90-9905-4eeb1e03db85" (UID: "0cfe0c86-7016-4d90-9905-4eeb1e03db85"). InnerVolumeSpecName "kube-api-access-7xrzs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:17:57.824518 kubelet[2558]: I0913 00:17:57.824464 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0cfe0c86-7016-4d90-9905-4eeb1e03db85" (UID: "0cfe0c86-7016-4d90-9905-4eeb1e03db85"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:17:57.827641 systemd[1]: var-lib-kubelet-pods-0cfe0c86\x2d7016\x2d4d90\x2d9905\x2d4eeb1e03db85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xrzs.mount: Deactivated successfully. Sep 13 00:17:57.827796 systemd[1]: var-lib-kubelet-pods-0cfe0c86\x2d7016\x2d4d90\x2d9905\x2d4eeb1e03db85-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:17:57.915700 kubelet[2558]: I0913 00:17:57.915635 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:17:57.915700 kubelet[2558]: I0913 00:17:57.915676 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xrzs\" (UniqueName: \"kubernetes.io/projected/0cfe0c86-7016-4d90-9905-4eeb1e03db85-kube-api-access-7xrzs\") on node \"localhost\" DevicePath \"\"" Sep 13 00:17:57.915700 kubelet[2558]: I0913 00:17:57.915688 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cfe0c86-7016-4d90-9905-4eeb1e03db85-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:17:58.271088 containerd[1462]: time="2025-09-13T00:17:58.271020855Z" level=info msg="StopPodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\"" Sep 13 00:17:58.279295 systemd[1]: Removed slice kubepods-besteffort-pod0cfe0c86_7016_4d90_9905_4eeb1e03db85.slice - libcontainer container kubepods-besteffort-pod0cfe0c86_7016_4d90_9905_4eeb1e03db85.slice. Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.315 [INFO][3909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.315 [INFO][3909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" iface="eth0" netns="/var/run/netns/cni-6cf0934f-65e1-0fdb-d108-dcbd2f49f35b" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.315 [INFO][3909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" iface="eth0" netns="/var/run/netns/cni-6cf0934f-65e1-0fdb-d108-dcbd2f49f35b" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.315 [INFO][3909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" iface="eth0" netns="/var/run/netns/cni-6cf0934f-65e1-0fdb-d108-dcbd2f49f35b" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.316 [INFO][3909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.316 [INFO][3909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.340 [INFO][3917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.340 [INFO][3917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.340 [INFO][3917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.347 [WARNING][3917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.347 [INFO][3917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.348 [INFO][3917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:58.354396 containerd[1462]: 2025-09-13 00:17:58.351 [INFO][3909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:17:58.354943 containerd[1462]: time="2025-09-13T00:17:58.354679761Z" level=info msg="TearDown network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" successfully" Sep 13 00:17:58.354943 containerd[1462]: time="2025-09-13T00:17:58.354716452Z" level=info msg="StopPodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" returns successfully" Sep 13 00:17:58.355536 containerd[1462]: time="2025-09-13T00:17:58.355505279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-svmbp,Uid:6748f3df-f396-46d5-b632-65b0d3fe85e1,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:17:58.357951 systemd[1]: run-netns-cni\x2d6cf0934f\x2d65e1\x2d0fdb\x2dd108\x2ddcbd2f49f35b.mount: Deactivated successfully. 
Sep 13 00:17:58.476893 systemd-networkd[1396]: cali18e193246c9: Link UP Sep 13 00:17:58.477419 systemd-networkd[1396]: cali18e193246c9: Gained carrier Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.394 [INFO][3925] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.405 [INFO][3925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0 calico-apiserver-5567664f8d- calico-apiserver 6748f3df-f396-46d5-b632-65b0d3fe85e1 979 0 2025-09-13 00:17:28 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5567664f8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5567664f8d-svmbp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali18e193246c9 [] [] <nil>}} ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.405 [INFO][3925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.431 [INFO][3940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" HandleID="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.431 [INFO][3940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" HandleID="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5567664f8d-svmbp", "timestamp":"2025-09-13 00:17:58.431566504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.431 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.432 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.432 [INFO][3940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.438 [INFO][3940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.443 [INFO][3940] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.447 [INFO][3940] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.448 [INFO][3940] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.450 [INFO][3940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.450 [INFO][3940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.451 [INFO][3940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.459 [INFO][3940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.465 [INFO][3940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.465 [INFO][3940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" host="localhost" Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.465 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:17:58.495965 containerd[1462]: 2025-09-13 00:17:58.465 [INFO][3940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" HandleID="k8s-pod-network.68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.496710 containerd[1462]: 2025-09-13 00:17:58.469 [INFO][3925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6748f3df-f396-46d5-b632-65b0d3fe85e1", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5567664f8d-svmbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18e193246c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:58.496710 containerd[1462]: 2025-09-13 00:17:58.469 [INFO][3925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.496710 containerd[1462]: 2025-09-13 00:17:58.469 [INFO][3925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18e193246c9 ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.496710 containerd[1462]: 2025-09-13 00:17:58.477 [INFO][3925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.496710 containerd[1462]: 2025-09-13 00:17:58.478 [INFO][3925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6748f3df-f396-46d5-b632-65b0d3fe85e1", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e", Pod:"calico-apiserver-5567664f8d-svmbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18e193246c9", MAC:"06:27:35:64:ee:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:58.496710 containerd[1462]: 2025-09-13 00:17:58.491 [INFO][3925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-svmbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:17:58.525815 containerd[1462]: time="2025-09-13T00:17:58.525622707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:58.525977 containerd[1462]: time="2025-09-13T00:17:58.525694336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:58.525977 containerd[1462]: time="2025-09-13T00:17:58.525709165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:58.526690 containerd[1462]: time="2025-09-13T00:17:58.526626320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:58.548716 systemd[1]: Started cri-containerd-68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e.scope - libcontainer container 68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e.
Sep 13 00:17:58.563136 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:58.589287 containerd[1462]: time="2025-09-13T00:17:58.589231330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-svmbp,Uid:6748f3df-f396-46d5-b632-65b0d3fe85e1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e\"" Sep 13 00:17:58.591590 containerd[1462]: time="2025-09-13T00:17:58.590928223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:17:58.737159 systemd[1]: run-containerd-runc-k8s.io-8231e66b72303595275f9fdb3f332bcf7c0712fa98fd439d1e5190abea164523-runc.THvPn2.mount: Deactivated successfully. Sep 13 00:17:58.776788 systemd[1]: Created slice kubepods-besteffort-podaafdd665_247c_4e1d_b0e1_42a9076032b7.slice - libcontainer container kubepods-besteffort-podaafdd665_247c_4e1d_b0e1_42a9076032b7.slice. Sep 13 00:17:58.924915 kubelet[2558]: I0913 00:17:58.924822 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aafdd665-247c-4e1d-b0e1-42a9076032b7-whisker-backend-key-pair\") pod \"whisker-84ddb6c6dd-fp5jn\" (UID: \"aafdd665-247c-4e1d-b0e1-42a9076032b7\") " pod="calico-system/whisker-84ddb6c6dd-fp5jn" Sep 13 00:17:58.924915 kubelet[2558]: I0913 00:17:58.924909 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aafdd665-247c-4e1d-b0e1-42a9076032b7-whisker-ca-bundle\") pod \"whisker-84ddb6c6dd-fp5jn\" (UID: \"aafdd665-247c-4e1d-b0e1-42a9076032b7\") " pod="calico-system/whisker-84ddb6c6dd-fp5jn" Sep 13 00:17:58.925663 kubelet[2558]: I0913 00:17:58.925134 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qw76\" (UniqueName: \"kubernetes.io/projected/aafdd665-247c-4e1d-b0e1-42a9076032b7-kube-api-access-5qw76\") pod \"whisker-84ddb6c6dd-fp5jn\" (UID: \"aafdd665-247c-4e1d-b0e1-42a9076032b7\") " pod="calico-system/whisker-84ddb6c6dd-fp5jn" Sep 13 00:17:59.034469 kernel: bpftool[4128]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:17:59.082319 containerd[1462]: time="2025-09-13T00:17:59.082276991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84ddb6c6dd-fp5jn,Uid:aafdd665-247c-4e1d-b0e1-42a9076032b7,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:59.228588 systemd-networkd[1396]: cali1ed4a180fa6: Link UP Sep 13 00:17:59.229326 systemd-networkd[1396]: cali1ed4a180fa6: Gained carrier Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.150 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0 whisker-84ddb6c6dd- calico-system aafdd665-247c-4e1d-b0e1-42a9076032b7 998 0 2025-09-13 00:17:58 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84ddb6c6dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84ddb6c6dd-fp5jn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1ed4a180fa6 [] [] <nil>}} ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system"
Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.150 [INFO][4135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.183 [INFO][4167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" HandleID="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Workload="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.183 [INFO][4167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" HandleID="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Workload="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84ddb6c6dd-fp5jn", "timestamp":"2025-09-13 00:17:59.183079353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.183 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.183 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.183 [INFO][4167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.190 [INFO][4167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.197 [INFO][4167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.203 [INFO][4167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.205 [INFO][4167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.207 [INFO][4167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.208 [INFO][4167] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.209 [INFO][4167] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82 Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.214 [INFO][4167] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.221 [INFO][4167] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.221 [INFO][4167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" host="localhost" Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.221 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:17:59.250022 containerd[1462]: 2025-09-13 00:17:59.221 [INFO][4167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" HandleID="k8s-pod-network.e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Workload="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.250863 containerd[1462]: 2025-09-13 00:17:59.225 [INFO][4135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0", GenerateName:"whisker-84ddb6c6dd-", Namespace:"calico-system", SelfLink:"", UID:"aafdd665-247c-4e1d-b0e1-42a9076032b7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84ddb6c6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84ddb6c6dd-fp5jn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ed4a180fa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:59.250863 containerd[1462]: 2025-09-13 00:17:59.226 [INFO][4135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.250863 containerd[1462]: 2025-09-13 00:17:59.226 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ed4a180fa6 ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.250863 containerd[1462]: 2025-09-13 00:17:59.230 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.250863 containerd[1462]: 2025-09-13 00:17:59.230 [INFO][4135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0", GenerateName:"whisker-84ddb6c6dd-", Namespace:"calico-system", SelfLink:"", UID:"aafdd665-247c-4e1d-b0e1-42a9076032b7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84ddb6c6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82", Pod:"whisker-84ddb6c6dd-fp5jn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ed4a180fa6", MAC:"ce:d3:6e:ec:d5:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:59.250863 containerd[1462]: 2025-09-13 00:17:59.245 [INFO][4135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82" Namespace="calico-system" Pod="whisker-84ddb6c6dd-fp5jn" WorkloadEndpoint="localhost-k8s-whisker--84ddb6c6dd--fp5jn-eth0" Sep 13 00:17:59.269758 containerd[1462]: time="2025-09-13T00:17:59.269711531Z" level=info msg="StopPodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\"" Sep 13 00:17:59.271563 containerd[1462]: time="2025-09-13T00:17:59.270090223Z" level=info msg="StopPodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\"" Sep 13 00:17:59.280992 containerd[1462]: time="2025-09-13T00:17:59.279645336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:59.280992 containerd[1462]: time="2025-09-13T00:17:59.280683953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:59.280992 containerd[1462]: time="2025-09-13T00:17:59.280698271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:59.280992 containerd[1462]: time="2025-09-13T00:17:59.280831909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:59.304139 systemd[1]: Started cri-containerd-e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82.scope - libcontainer container e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82.
Sep 13 00:17:59.322415 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:59.361248 containerd[1462]: time="2025-09-13T00:17:59.359651244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84ddb6c6dd-fp5jn,Uid:aafdd665-247c-4e1d-b0e1-42a9076032b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82\"" Sep 13 00:17:59.396019 systemd-networkd[1396]: vxlan.calico: Link UP Sep 13 00:17:59.396033 systemd-networkd[1396]: vxlan.calico: Gained carrier Sep 13 00:17:59.536755 systemd-networkd[1396]: cali18e193246c9: Gained IPv6LL Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.503 [INFO][4222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.503 [INFO][4222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" iface="eth0" netns="/var/run/netns/cni-4b436347-9300-d9c0-6334-7f13839e886a" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.506 [INFO][4222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" iface="eth0" netns="/var/run/netns/cni-4b436347-9300-d9c0-6334-7f13839e886a" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.507 [INFO][4222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" iface="eth0" netns="/var/run/netns/cni-4b436347-9300-d9c0-6334-7f13839e886a" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.507 [INFO][4222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.507 [INFO][4222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.541 [INFO][4296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.541 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.541 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.583 [WARNING][4296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.583 [INFO][4296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.586 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:59.599340 containerd[1462]: 2025-09-13 00:17:59.596 [INFO][4222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:17:59.599876 containerd[1462]: time="2025-09-13T00:17:59.599589895Z" level=info msg="TearDown network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" successfully" Sep 13 00:17:59.599876 containerd[1462]: time="2025-09-13T00:17:59.599619351Z" level=info msg="StopPodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" returns successfully" Sep 13 00:17:59.600605 containerd[1462]: time="2025-09-13T00:17:59.600570271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-nl824,Uid:42dd6ade-572a-4087-84d6-79c32851c332,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.547 [INFO][4221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.547 [INFO][4221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" iface="eth0" netns="/var/run/netns/cni-5a8b2f41-6dd8-1a59-664c-b3baaaf73bc0" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.548 [INFO][4221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" iface="eth0" netns="/var/run/netns/cni-5a8b2f41-6dd8-1a59-664c-b3baaaf73bc0" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.549 [INFO][4221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" iface="eth0" netns="/var/run/netns/cni-5a8b2f41-6dd8-1a59-664c-b3baaaf73bc0" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.550 [INFO][4221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.550 [INFO][4221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.580 [INFO][4305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.580 [INFO][4305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.586 [INFO][4305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.676 [WARNING][4305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.677 [INFO][4305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.680 [INFO][4305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:59.691272 containerd[1462]: 2025-09-13 00:17:59.683 [INFO][4221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:17:59.691475 systemd[1]: run-netns-cni\x2d4b436347\x2d9300\x2dd9c0\x2d6334\x2d7f13839e886a.mount: Deactivated successfully. Sep 13 00:17:59.693720 containerd[1462]: time="2025-09-13T00:17:59.693675652Z" level=info msg="TearDown network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" successfully" Sep 13 00:17:59.693720 containerd[1462]: time="2025-09-13T00:17:59.693714036Z" level=info msg="StopPodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" returns successfully" Sep 13 00:17:59.694466 containerd[1462]: time="2025-09-13T00:17:59.694422736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dgk4p,Uid:1f424473-224d-4e51-9ca9-c442b0bc325d,Namespace:calico-system,Attempt:1,}" Sep 13 00:17:59.697201 systemd[1]: run-netns-cni\x2d5a8b2f41\x2d6dd8\x2d1a59\x2d664c\x2db3baaaf73bc0.mount: Deactivated successfully. 
Sep 13 00:18:00.040741 systemd-networkd[1396]: cali2cb48d2a36a: Link UP Sep 13 00:18:00.041735 systemd-networkd[1396]: cali2cb48d2a36a: Gained carrier Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.941 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0 calico-apiserver-5567664f8d- calico-apiserver 42dd6ade-572a-4087-84d6-79c32851c332 1007 0 2025-09-13 00:17:28 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5567664f8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5567664f8d-nl824 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cb48d2a36a [] [] <nil>}} ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.941 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.990 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" HandleID="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.990 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" HandleID="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325f50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5567664f8d-nl824", "timestamp":"2025-09-13 00:17:59.990333543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.990 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.990 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:17:59.990 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.001 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.007 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.012 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.014 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.016 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.016 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.019 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.023 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.030 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.030 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" host="localhost" Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.031 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:18:00.058254 containerd[1462]: 2025-09-13 00:18:00.031 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" HandleID="k8s-pod-network.3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.060049 containerd[1462]: 2025-09-13 00:18:00.037 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"42dd6ade-572a-4087-84d6-79c32851c332", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5567664f8d-nl824", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cb48d2a36a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:00.060049 containerd[1462]: 2025-09-13 00:18:00.037 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.060049 containerd[1462]: 2025-09-13 00:18:00.037 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cb48d2a36a ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.060049 containerd[1462]: 2025-09-13 00:18:00.041 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.060049 containerd[1462]: 2025-09-13 00:18:00.042 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"42dd6ade-572a-4087-84d6-79c32851c332", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded", Pod:"calico-apiserver-5567664f8d-nl824", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cb48d2a36a", MAC:"a2:d9:77:46:dd:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:00.060049 containerd[1462]: 2025-09-13 00:18:00.054 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded" Namespace="calico-apiserver" Pod="calico-apiserver-5567664f8d-nl824" WorkloadEndpoint="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:00.090576 containerd[1462]: time="2025-09-13T00:18:00.089216607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:00.090576 containerd[1462]: time="2025-09-13T00:18:00.089304106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:18:00.090576 containerd[1462]: time="2025-09-13T00:18:00.089322211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:00.090576 containerd[1462]: time="2025-09-13T00:18:00.089468574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:00.134812 systemd[1]: Started cri-containerd-3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded.scope - libcontainer container 3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded. 
Sep 13 00:18:00.163943 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:00.180429 systemd-networkd[1396]: cali83dbf31f82f: Link UP Sep 13 00:18:00.182467 systemd-networkd[1396]: cali83dbf31f82f: Gained carrier Sep 13 00:18:00.207507 containerd[1462]: time="2025-09-13T00:18:00.207432657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567664f8d-nl824,Uid:42dd6ade-572a-4087-84d6-79c32851c332,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded\"" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:17:59.960 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--dgk4p-eth0 goldmane-54d579b49d- calico-system 1f424473-224d-4e51-9ca9-c442b0bc325d 1009 0 2025-09-13 00:17:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-dgk4p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali83dbf31f82f [] [] }} ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:17:59.960 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.009 [INFO][4389] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" HandleID="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.009 [INFO][4389] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" HandleID="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-dgk4p", "timestamp":"2025-09-13 00:18:00.009117345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.009 [INFO][4389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.030 [INFO][4389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.031 [INFO][4389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.114 [INFO][4389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.123 [INFO][4389] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.137 [INFO][4389] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.143 [INFO][4389] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.147 [INFO][4389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.147 [INFO][4389] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.149 [INFO][4389] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9 Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.158 [INFO][4389] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.168 [INFO][4389] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.168 [INFO][4389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" host="localhost" Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.168 [INFO][4389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:18:00.210424 containerd[1462]: 2025-09-13 00:18:00.168 [INFO][4389] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" HandleID="k8s-pod-network.d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.211212 containerd[1462]: 2025-09-13 00:18:00.174 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--dgk4p-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1f424473-224d-4e51-9ca9-c442b0bc325d", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-dgk4p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83dbf31f82f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:00.211212 containerd[1462]: 2025-09-13 00:18:00.175 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.211212 containerd[1462]: 2025-09-13 00:18:00.175 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83dbf31f82f ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.211212 containerd[1462]: 2025-09-13 00:18:00.183 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.211212 containerd[1462]: 2025-09-13 00:18:00.183 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--dgk4p-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1f424473-224d-4e51-9ca9-c442b0bc325d", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9", Pod:"goldmane-54d579b49d-dgk4p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83dbf31f82f", MAC:"c6:cc:ab:76:5a:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:00.211212 containerd[1462]: 2025-09-13 00:18:00.200 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9" Namespace="calico-system" Pod="goldmane-54d579b49d-dgk4p" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:00.240905 containerd[1462]: time="2025-09-13T00:18:00.239699755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:00.240905 containerd[1462]: time="2025-09-13T00:18:00.240673987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:18:00.240905 containerd[1462]: time="2025-09-13T00:18:00.240694757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:00.240905 containerd[1462]: time="2025-09-13T00:18:00.240797345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:00.271749 kubelet[2558]: I0913 00:18:00.271696 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cfe0c86-7016-4d90-9905-4eeb1e03db85" path="/var/lib/kubelet/pods/0cfe0c86-7016-4d90-9905-4eeb1e03db85/volumes" Sep 13 00:18:00.272842 systemd[1]: Started cri-containerd-d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9.scope - libcontainer container d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9. 
Sep 13 00:18:00.288770 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:00.317317 containerd[1462]: time="2025-09-13T00:18:00.317155541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dgk4p,Uid:1f424473-224d-4e51-9ca9-c442b0bc325d,Namespace:calico-system,Attempt:1,} returns sandbox id \"d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9\"" Sep 13 00:18:01.199826 systemd-networkd[1396]: cali1ed4a180fa6: Gained IPv6LL Sep 13 00:18:01.269603 containerd[1462]: time="2025-09-13T00:18:01.269522415Z" level=info msg="StopPodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\"" Sep 13 00:18:01.270478 containerd[1462]: time="2025-09-13T00:18:01.269652998Z" level=info msg="StopPodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\"" Sep 13 00:18:01.329178 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL Sep 13 00:18:01.522853 systemd-networkd[1396]: cali2cb48d2a36a: Gained IPv6LL Sep 13 00:18:01.647788 systemd-networkd[1396]: cali83dbf31f82f: Gained IPv6LL Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.520 [INFO][4528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.521 [INFO][4528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" iface="eth0" netns="/var/run/netns/cni-71eb092a-66ff-6b30-065c-a28cf36a38de" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.521 [INFO][4528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" iface="eth0" netns="/var/run/netns/cni-71eb092a-66ff-6b30-065c-a28cf36a38de" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.522 [INFO][4528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" iface="eth0" netns="/var/run/netns/cni-71eb092a-66ff-6b30-065c-a28cf36a38de" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.522 [INFO][4528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.522 [INFO][4528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.554 [INFO][4541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.555 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.555 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.713 [WARNING][4541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.713 [INFO][4541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.716 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:01.722825 containerd[1462]: 2025-09-13 00:18:01.719 [INFO][4528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:01.729036 systemd[1]: run-netns-cni\x2d71eb092a\x2d66ff\x2d6b30\x2d065c\x2da28cf36a38de.mount: Deactivated successfully. Sep 13 00:18:01.730029 containerd[1462]: time="2025-09-13T00:18:01.729971099Z" level=info msg="TearDown network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" successfully" Sep 13 00:18:01.730029 containerd[1462]: time="2025-09-13T00:18:01.730021426Z" level=info msg="StopPodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" returns successfully" Sep 13 00:18:01.730574 kubelet[2558]: E0913 00:18:01.730507 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:01.732644 containerd[1462]: time="2025-09-13T00:18:01.732602738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg67g,Uid:631d57a2-dd3c-4c24-8d55-9feb2884e566,Namespace:kube-system,Attempt:1,}" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.711 [INFO][4526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.712 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" iface="eth0" netns="/var/run/netns/cni-82c3e7fe-321e-e1b0-f6db-a3665f53cb50" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.713 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" iface="eth0" netns="/var/run/netns/cni-82c3e7fe-321e-e1b0-f6db-a3665f53cb50" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.713 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" iface="eth0" netns="/var/run/netns/cni-82c3e7fe-321e-e1b0-f6db-a3665f53cb50" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.714 [INFO][4526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.714 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.939 [INFO][4551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.939 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.939 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.947 [WARNING][4551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.947 [INFO][4551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.949 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:01.955752 containerd[1462]: 2025-09-13 00:18:01.952 [INFO][4526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:01.955752 containerd[1462]: time="2025-09-13T00:18:01.955700410Z" level=info msg="TearDown network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" successfully" Sep 13 00:18:01.955752 containerd[1462]: time="2025-09-13T00:18:01.955740508Z" level=info msg="StopPodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" returns successfully" Sep 13 00:18:01.957041 kubelet[2558]: E0913 00:18:01.956991 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:01.958102 containerd[1462]: time="2025-09-13T00:18:01.958054213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4ctb,Uid:9994ca67-7eed-4733-95f7-6dbed4d7c37b,Namespace:kube-system,Attempt:1,}" Sep 13 00:18:01.959137 systemd[1]: run-netns-cni\x2d82c3e7fe\x2d321e\x2de1b0\x2df6db\x2da3665f53cb50.mount: Deactivated successfully. 
Sep 13 00:18:02.078277 containerd[1462]: time="2025-09-13T00:18:02.078216386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:02.080341 containerd[1462]: time="2025-09-13T00:18:02.079730535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:18:02.081696 containerd[1462]: time="2025-09-13T00:18:02.081662000Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:02.084183 containerd[1462]: time="2025-09-13T00:18:02.084140511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:02.084780 containerd[1462]: time="2025-09-13T00:18:02.084749144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.493791615s" Sep 13 00:18:02.084831 containerd[1462]: time="2025-09-13T00:18:02.084784923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:18:02.091716 containerd[1462]: time="2025-09-13T00:18:02.091658771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:18:02.097332 containerd[1462]: time="2025-09-13T00:18:02.097269852Z" level=info msg="CreateContainer within sandbox \"68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:18:02.119183 containerd[1462]: time="2025-09-13T00:18:02.118779746Z" level=info msg="CreateContainer within sandbox \"68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"93efe9e1f6470b1639ac01dbbb4c44276e96a8ccb8177b49429db30fe479b2ff\"" Sep 13 00:18:02.120566 containerd[1462]: time="2025-09-13T00:18:02.120407616Z" level=info msg="StartContainer for \"93efe9e1f6470b1639ac01dbbb4c44276e96a8ccb8177b49429db30fe479b2ff\"" Sep 13 00:18:02.178903 systemd[1]: Started cri-containerd-93efe9e1f6470b1639ac01dbbb4c44276e96a8ccb8177b49429db30fe479b2ff.scope - libcontainer container 93efe9e1f6470b1639ac01dbbb4c44276e96a8ccb8177b49429db30fe479b2ff. 
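The PullImage/CreateContainer/StartContainer sequence above goes through the CRI between kubelet and containerd. For reference, the equivalent pull/create/start driven directly against containerd's 1.x Go client looks roughly like this; the image ref is the one from the log, while the container and snapshot IDs are hypothetical.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd the kubelet uses; CRI pods live in "k8s.io".
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image the log just finished fetching.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container and its task, then start it -- the client-side
	// counterpart of the CreateContainer/StartContainer pair in the log.
	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started:", task.ID())
}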
Sep 13 00:18:02.240656 systemd-networkd[1396]: cali0ac9f37a677: Link UP Sep 13 00:18:02.244482 systemd-networkd[1396]: cali0ac9f37a677: Gained carrier Sep 13 00:18:02.252071 containerd[1462]: time="2025-09-13T00:18:02.252014737Z" level=info msg="StartContainer for \"93efe9e1f6470b1639ac01dbbb4c44276e96a8ccb8177b49429db30fe479b2ff\" returns successfully" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.111 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bg67g-eth0 coredns-674b8bbfcf- kube-system 631d57a2-dd3c-4c24-8d55-9feb2884e566 1023 0 2025-09-13 00:17:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bg67g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0ac9f37a677 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.111 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.159 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" HandleID="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.160 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" HandleID="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000585150), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bg67g", "timestamp":"2025-09-13 00:18:02.159456126 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.160 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.160 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.160 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.180 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.191 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.202 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.207 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.211 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.212 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.214 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022 Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.219 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.224 [INFO][4593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.225 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" host="localhost" Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.225 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:18:02.263741 containerd[1462]: 2025-09-13 00:18:02.225 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" HandleID="k8s-pod-network.517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.264360 containerd[1462]: 2025-09-13 00:18:02.229 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bg67g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"631d57a2-dd3c-4c24-8d55-9feb2884e566", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bg67g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ac9f37a677", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:02.264360 containerd[1462]: 2025-09-13 00:18:02.229 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.264360 containerd[1462]: 2025-09-13 00:18:02.229 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ac9f37a677 ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.264360 containerd[1462]: 2025-09-13 00:18:02.248 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.264360 
containerd[1462]: 2025-09-13 00:18:02.250 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bg67g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"631d57a2-dd3c-4c24-8d55-9feb2884e566", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022", Pod:"coredns-674b8bbfcf-bg67g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ac9f37a677", MAC:"1a:e8:fc:a4:0f:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:02.264360 containerd[1462]: 2025-09-13 00:18:02.259 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg67g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:02.270230 containerd[1462]: time="2025-09-13T00:18:02.269773848Z" level=info msg="StopPodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\"" Sep 13 00:18:02.271463 containerd[1462]: time="2025-09-13T00:18:02.271419671Z" level=info msg="StopPodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\"" Sep 13 00:18:02.325207 containerd[1462]: time="2025-09-13T00:18:02.325002924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:02.325207 containerd[1462]: time="2025-09-13T00:18:02.325163073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:18:02.325402 containerd[1462]: time="2025-09-13T00:18:02.325280579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:02.325779 containerd[1462]: time="2025-09-13T00:18:02.325480946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:02.345927 systemd-networkd[1396]: calif93d59d5539: Link UP Sep 13 00:18:02.352302 systemd-networkd[1396]: calif93d59d5539: Gained carrier Sep 13 00:18:02.377827 systemd[1]: Started cri-containerd-517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022.scope - libcontainer container 517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022. Sep 13 00:18:02.395904 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:02.432359 containerd[1462]: time="2025-09-13T00:18:02.432316271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg67g,Uid:631d57a2-dd3c-4c24-8d55-9feb2884e566,Namespace:kube-system,Attempt:1,} returns sandbox id \"517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022\"" Sep 13 00:18:02.433649 kubelet[2558]: E0913 00:18:02.433449 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.185 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0 coredns-674b8bbfcf- kube-system 9994ca67-7eed-4733-95f7-6dbed4d7c37b 1024 0 2025-09-13 00:17:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-n4ctb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif93d59d5539 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.187 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.232 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" HandleID="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.232 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" HandleID="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-n4ctb", "timestamp":"2025-09-13 00:18:02.232305555 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.233 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.233 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.233 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.281 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.291 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.298 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.300 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.302 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.303 [INFO][4628] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.304 [INFO][4628] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.311 [INFO][4628] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.319 [INFO][4628] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.319 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" host="localhost" Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.319 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:18:03.020677 containerd[1462]: 2025-09-13 00:18:02.320 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" HandleID="k8s-pod-network.02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.023014 containerd[1462]: 2025-09-13 00:18:02.334 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9994ca67-7eed-4733-95f7-6dbed4d7c37b", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-n4ctb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif93d59d5539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:03.023014 containerd[1462]: 2025-09-13 00:18:02.334 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.023014 containerd[1462]: 2025-09-13 00:18:02.334 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif93d59d5539 ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.023014 containerd[1462]: 2025-09-13 00:18:02.351 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.023014 
containerd[1462]: 2025-09-13 00:18:02.355 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9994ca67-7eed-4733-95f7-6dbed4d7c37b", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee", Pod:"coredns-674b8bbfcf-n4ctb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif93d59d5539", MAC:"be:ba:85:3e:ff:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:03.023014 containerd[1462]: 2025-09-13 00:18:03.016 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4ctb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.488 [INFO][4678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.489 [INFO][4678] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" iface="eth0" netns="/var/run/netns/cni-4daf57db-7599-a1e9-359e-60b908b0d82d" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.489 [INFO][4678] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" iface="eth0" netns="/var/run/netns/cni-4daf57db-7599-a1e9-359e-60b908b0d82d" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.489 [INFO][4678] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" iface="eth0" netns="/var/run/netns/cni-4daf57db-7599-a1e9-359e-60b908b0d82d" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.489 [INFO][4678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.489 [INFO][4678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.512 [INFO][4748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.513 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:02.513 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:03.011 [WARNING][4748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:03.011 [INFO][4748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:03.293 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:03.299318 containerd[1462]: 2025-09-13 00:18:03.296 [INFO][4678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:03.301284 containerd[1462]: time="2025-09-13T00:18:03.301228941Z" level=info msg="TearDown network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" successfully" Sep 13 00:18:03.301284 containerd[1462]: time="2025-09-13T00:18:03.301277244Z" level=info msg="StopPodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" returns successfully" Sep 13 00:18:03.302083 containerd[1462]: time="2025-09-13T00:18:03.302048620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8667ffb7-b5pnn,Uid:4bc675b9-f8ab-4324-abb1-fe64dccc6391,Namespace:calico-system,Attempt:1,}" Sep 13 00:18:03.304392 systemd[1]: run-netns-cni\x2d4daf57db\x2d7599\x2da1e9\x2d359e\x2d60b908b0d82d.mount: Deactivated successfully. Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.015 [INFO][4690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.016 [INFO][4690] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" iface="eth0" netns="/var/run/netns/cni-0ae17d86-4422-8081-4384-cc53d164c2ab" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.017 [INFO][4690] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" iface="eth0" netns="/var/run/netns/cni-0ae17d86-4422-8081-4384-cc53d164c2ab" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.017 [INFO][4690] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" iface="eth0" netns="/var/run/netns/cni-0ae17d86-4422-8081-4384-cc53d164c2ab" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.018 [INFO][4690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.018 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.044 [INFO][4757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.045 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.293 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.359 [WARNING][4757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.359 [INFO][4757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.360 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:03.366434 containerd[1462]: 2025-09-13 00:18:03.363 [INFO][4690] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:03.367146 containerd[1462]: time="2025-09-13T00:18:03.367103614Z" level=info msg="TearDown network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" successfully" Sep 13 00:18:03.367146 containerd[1462]: time="2025-09-13T00:18:03.367134673Z" level=info msg="StopPodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" returns successfully" Sep 13 00:18:03.368804 containerd[1462]: time="2025-09-13T00:18:03.368776518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnh9g,Uid:0bee5869-7316-4315-890e-b413da2035a5,Namespace:calico-system,Attempt:1,}" Sep 13 00:18:03.370050 systemd[1]: run-netns-cni\x2d0ae17d86\x2d4422\x2d8081\x2d4384\x2dcc53d164c2ab.mount: Deactivated successfully. Sep 13 00:18:03.631774 systemd-networkd[1396]: calif93d59d5539: Gained IPv6LL Sep 13 00:18:03.695752 systemd-networkd[1396]: cali0ac9f37a677: Gained IPv6LL Sep 13 00:18:03.746245 containerd[1462]: time="2025-09-13T00:18:03.746106505Z" level=info msg="CreateContainer within sandbox \"517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:18:03.822140 kubelet[2558]: I0913 00:18:03.821808 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5567664f8d-svmbp" podStartSLOduration=32.324545166 podStartE2EDuration="35.821785453s" podCreationTimestamp="2025-09-13 00:17:28 +0000 UTC" firstStartedPulling="2025-09-13 00:17:58.590715832 +0000 UTC m=+48.433948062" lastFinishedPulling="2025-09-13 00:18:02.087956119 +0000 UTC m=+51.931188349" observedRunningTime="2025-09-13 00:18:03.821395241 +0000 UTC m=+53.664627471" watchObservedRunningTime="2025-09-13 00:18:03.821785453 +0000 UTC m=+53.665017683" Sep 13 00:18:03.868249 containerd[1462]: time="2025-09-13T00:18:03.868087626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:03.868447 containerd[1462]: time="2025-09-13T00:18:03.868266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:18:03.868447 containerd[1462]: time="2025-09-13T00:18:03.868304203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:03.868447 containerd[1462]: time="2025-09-13T00:18:03.868424255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:03.920743 systemd[1]: Started cri-containerd-02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee.scope - libcontainer container 02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee. Sep 13 00:18:03.928048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452493367.mount: Deactivated successfully. 
Sep 13 00:18:03.940418 containerd[1462]: time="2025-09-13T00:18:03.940352555Z" level=info msg="CreateContainer within sandbox \"517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d56e69bca988871d98810ad51fcf8eb6940f5b9ad4b3d04feb83febf308f94d3\"" Sep 13 00:18:03.942814 containerd[1462]: time="2025-09-13T00:18:03.942760385Z" level=info msg="StartContainer for \"d56e69bca988871d98810ad51fcf8eb6940f5b9ad4b3d04feb83febf308f94d3\"" Sep 13 00:18:03.956671 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:04.030059 containerd[1462]: time="2025-09-13T00:18:04.029850555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4ctb,Uid:9994ca67-7eed-4733-95f7-6dbed4d7c37b,Namespace:kube-system,Attempt:1,} returns sandbox id \"02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee\"" Sep 13 00:18:04.032876 kubelet[2558]: E0913 00:18:04.031441 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:04.045706 systemd[1]: Started cri-containerd-d56e69bca988871d98810ad51fcf8eb6940f5b9ad4b3d04feb83febf308f94d3.scope - libcontainer container d56e69bca988871d98810ad51fcf8eb6940f5b9ad4b3d04feb83febf308f94d3. Sep 13 00:18:04.217877 containerd[1462]: time="2025-09-13T00:18:04.214861920Z" level=info msg="CreateContainer within sandbox \"02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:18:04.569932 containerd[1462]: time="2025-09-13T00:18:04.569702753Z" level=info msg="StartContainer for \"d56e69bca988871d98810ad51fcf8eb6940f5b9ad4b3d04feb83febf308f94d3\" returns successfully" Sep 13 00:18:04.569932 containerd[1462]: time="2025-09-13T00:18:04.569725807Z" level=info msg="CreateContainer within sandbox \"02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"935281f15d4fd583c0055cee16be0a3994518cdbfdd9f965248a67b99f35b6f9\"" Sep 13 00:18:04.570714 containerd[1462]: time="2025-09-13T00:18:04.570676268Z" level=info msg="StartContainer for \"935281f15d4fd583c0055cee16be0a3994518cdbfdd9f965248a67b99f35b6f9\"" Sep 13 00:18:04.604964 systemd[1]: Started cri-containerd-935281f15d4fd583c0055cee16be0a3994518cdbfdd9f965248a67b99f35b6f9.scope - libcontainer container 935281f15d4fd583c0055cee16be0a3994518cdbfdd9f965248a67b99f35b6f9. 
Sep 13 00:18:04.627809 systemd-networkd[1396]: cali44906cd0334: Link UP Sep 13 00:18:04.628787 systemd-networkd[1396]: cali44906cd0334: Gained carrier Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.017 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cnh9g-eth0 csi-node-driver- calico-system 0bee5869-7316-4315-890e-b413da2035a5 1041 0 2025-09-13 00:17:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cnh9g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali44906cd0334 [] [] }} ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.017 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.206 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" HandleID="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.207 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" HandleID="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cnh9g", "timestamp":"2025-09-13 00:18:04.206623484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.208 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.208 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.208 [INFO][4873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.223 [INFO][4873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.232 [INFO][4873] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.242 [INFO][4873] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.247 [INFO][4873] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.259 [INFO][4873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.259 [INFO][4873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.268 [INFO][4873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.557 [INFO][4873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.607 [INFO][4873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.607 [INFO][4873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" host="localhost" Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.607 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:18:04.661719 containerd[1462]: 2025-09-13 00:18:04.607 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" HandleID="k8s-pod-network.ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.662379 containerd[1462]: 2025-09-13 00:18:04.619 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cnh9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0bee5869-7316-4315-890e-b413da2035a5", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cnh9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44906cd0334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:04.662379 containerd[1462]: 2025-09-13 00:18:04.620 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.662379 containerd[1462]: 2025-09-13 00:18:04.621 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44906cd0334 ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.662379 containerd[1462]: 2025-09-13 00:18:04.628 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.662379 containerd[1462]: 2025-09-13 00:18:04.629 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cnh9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0bee5869-7316-4315-890e-b413da2035a5", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf", Pod:"csi-node-driver-cnh9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44906cd0334", MAC:"b6:2d:8f:4d:10:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:04.662379 containerd[1462]: 2025-09-13 00:18:04.653 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf" Namespace="calico-system" Pod="csi-node-driver-cnh9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:04.682880 containerd[1462]: time="2025-09-13T00:18:04.682813933Z" level=info msg="StartContainer for \"935281f15d4fd583c0055cee16be0a3994518cdbfdd9f965248a67b99f35b6f9\" returns successfully" Sep 13 00:18:04.689904 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:59396.service - OpenSSH per-connection server daemon (10.0.0.1:59396). Sep 13 00:18:04.705783 containerd[1462]: time="2025-09-13T00:18:04.704879166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:04.705783 containerd[1462]: time="2025-09-13T00:18:04.705675600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:18:04.705783 containerd[1462]: time="2025-09-13T00:18:04.705688736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:04.706730 containerd[1462]: time="2025-09-13T00:18:04.705937585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:04.715445 kubelet[2558]: E0913 00:18:04.715265 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:04.720056 kubelet[2558]: I0913 00:18:04.719788 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:18:04.721064 kubelet[2558]: E0913 00:18:04.720889 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:04.767799 systemd[1]: Started cri-containerd-ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf.scope - libcontainer container ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf. Sep 13 00:18:04.780356 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:04.787529 sshd[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:04.789137 systemd-networkd[1396]: cali2bcece92da1: Link UP Sep 13 00:18:04.793327 systemd-networkd[1396]: cali2bcece92da1: Gained carrier Sep 13 00:18:04.802639 systemd-logind[1446]: New session 10 of user core. Sep 13 00:18:04.805734 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:18:04.813596 kubelet[2558]: I0913 00:18:04.813491 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bg67g" podStartSLOduration=47.813468578 podStartE2EDuration="47.813468578s" podCreationTimestamp="2025-09-13 00:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:18:04.765775932 +0000 UTC m=+54.609008162" watchObservedRunningTime="2025-09-13 00:18:04.813468578 +0000 UTC m=+54.656700808" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.194 [INFO][4816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0 calico-kube-controllers-5d8667ffb7- calico-system 4bc675b9-f8ab-4324-abb1-fe64dccc6391 1040 0 2025-09-13 00:17:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d8667ffb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d8667ffb7-b5pnn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2bcece92da1 [] [] }} ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.194 [INFO][4816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.274 [INFO][4882] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" HandleID="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.274 [INFO][4882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" HandleID="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d8667ffb7-b5pnn", "timestamp":"2025-09-13 00:18:04.273988997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.274 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.607 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.608 [INFO][4882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.626 [INFO][4882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.659 [INFO][4882] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.672 [INFO][4882] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.679 [INFO][4882] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.683 [INFO][4882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.683 [INFO][4882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.692 [INFO][4882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9 Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.714 [INFO][4882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.740 [INFO][4882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.740 [INFO][4882] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" host="localhost" Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.740 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:04.823750 containerd[1462]: 2025-09-13 00:18:04.740 [INFO][4882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" HandleID="k8s-pod-network.190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.825744 containerd[1462]: 2025-09-13 00:18:04.770 [INFO][4816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0", GenerateName:"calico-kube-controllers-5d8667ffb7-", Namespace:"calico-system", SelfLink:"", UID:"4bc675b9-f8ab-4324-abb1-fe64dccc6391", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8667ffb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d8667ffb7-b5pnn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bcece92da1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:04.825744 containerd[1462]: 2025-09-13 00:18:04.772 [INFO][4816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.825744 containerd[1462]: 2025-09-13 00:18:04.772 [INFO][4816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bcece92da1 ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.825744 containerd[1462]: 2025-09-13 00:18:04.790 [INFO][4816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.825744 containerd[1462]: 2025-09-13 00:18:04.794 [INFO][4816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0", GenerateName:"calico-kube-controllers-5d8667ffb7-", Namespace:"calico-system", SelfLink:"", UID:"4bc675b9-f8ab-4324-abb1-fe64dccc6391", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8667ffb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9", Pod:"calico-kube-controllers-5d8667ffb7-b5pnn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bcece92da1", MAC:"36:fe:cd:b5:f8:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:04.825744 containerd[1462]: 2025-09-13 00:18:04.815 [INFO][4816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9" Namespace="calico-system" Pod="calico-kube-controllers-5d8667ffb7-b5pnn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:04.889530 containerd[1462]: time="2025-09-13T00:18:04.886890154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:04.889530 containerd[1462]: time="2025-09-13T00:18:04.886963916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:18:04.889530 containerd[1462]: time="2025-09-13T00:18:04.886977451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:04.889530 containerd[1462]: time="2025-09-13T00:18:04.887078136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:18:04.934922 systemd[1]: Started cri-containerd-190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9.scope - libcontainer container 190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9. Sep 13 00:18:04.945169 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:04.959728 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:04.980423 containerd[1462]: time="2025-09-13T00:18:04.980361772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnh9g,Uid:0bee5869-7316-4315-890e-b413da2035a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf\"" Sep 13 00:18:05.022478 containerd[1462]: time="2025-09-13T00:18:05.022392516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8667ffb7-b5pnn,Uid:4bc675b9-f8ab-4324-abb1-fe64dccc6391,Namespace:calico-system,Attempt:1,} returns sandbox id \"190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9\"" Sep 13 00:18:05.068994 sshd[4945]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:05.075664 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:59396.service: Deactivated successfully. Sep 13 00:18:05.082151 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:18:05.083346 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:18:05.084971 systemd-logind[1446]: Removed session 10. Sep 13 00:18:05.366714 containerd[1462]: time="2025-09-13T00:18:05.366357973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:05.369460 containerd[1462]: time="2025-09-13T00:18:05.368096308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:18:05.370366 containerd[1462]: time="2025-09-13T00:18:05.370268048Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:05.375509 containerd[1462]: time="2025-09-13T00:18:05.375454258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:05.377241 containerd[1462]: time="2025-09-13T00:18:05.376733581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.285021958s" Sep 13 00:18:05.377241 containerd[1462]: time="2025-09-13T00:18:05.376767104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:18:05.385636 containerd[1462]: time="2025-09-13T00:18:05.384861072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:18:05.391144 containerd[1462]: time="2025-09-13T00:18:05.390708174Z" 
level=info msg="CreateContainer within sandbox \"e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:18:05.400603 kubelet[2558]: I0913 00:18:05.400374 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n4ctb" podStartSLOduration=49.400346612 podStartE2EDuration="49.400346612s" podCreationTimestamp="2025-09-13 00:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:18:04.821951304 +0000 UTC m=+54.665183534" watchObservedRunningTime="2025-09-13 00:18:05.400346612 +0000 UTC m=+55.243578842" Sep 13 00:18:05.415033 containerd[1462]: time="2025-09-13T00:18:05.414963061Z" level=info msg="CreateContainer within sandbox \"e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"40c1c577203a522e5d01e8f15a3d63a882973ccfe8b6cc8f85b4bf525fd24ea8\"" Sep 13 00:18:05.416154 containerd[1462]: time="2025-09-13T00:18:05.416116501Z" level=info msg="StartContainer for \"40c1c577203a522e5d01e8f15a3d63a882973ccfe8b6cc8f85b4bf525fd24ea8\"" Sep 13 00:18:05.454813 systemd[1]: Started cri-containerd-40c1c577203a522e5d01e8f15a3d63a882973ccfe8b6cc8f85b4bf525fd24ea8.scope - libcontainer container 40c1c577203a522e5d01e8f15a3d63a882973ccfe8b6cc8f85b4bf525fd24ea8. Sep 13 00:18:05.520630 containerd[1462]: time="2025-09-13T00:18:05.520533337Z" level=info msg="StartContainer for \"40c1c577203a522e5d01e8f15a3d63a882973ccfe8b6cc8f85b4bf525fd24ea8\" returns successfully" Sep 13 00:18:05.730408 kubelet[2558]: E0913 00:18:05.730207 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:05.730408 kubelet[2558]: E0913 00:18:05.730331 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:05.770137 containerd[1462]: time="2025-09-13T00:18:05.770019151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:18:05.773476 containerd[1462]: time="2025-09-13T00:18:05.773414295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 388.501684ms" Sep 13 00:18:05.773476 containerd[1462]: time="2025-09-13T00:18:05.773469521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:18:05.776680 containerd[1462]: time="2025-09-13T00:18:05.776623882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:18:05.782478 containerd[1462]: time="2025-09-13T00:18:05.782434765Z" level=info msg="CreateContainer within sandbox \"3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:18:05.786886 containerd[1462]: time="2025-09-13T00:18:05.786817739Z" level=info 
msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:05.802169 containerd[1462]: time="2025-09-13T00:18:05.802104207Z" level=info msg="CreateContainer within sandbox \"3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"81fc2c2dd10015c991d3af73ccfff6a8ffb2ad3403247aaaeb9d280c0fe68b25\"" Sep 13 00:18:05.803595 containerd[1462]: time="2025-09-13T00:18:05.802842358Z" level=info msg="StartContainer for \"81fc2c2dd10015c991d3af73ccfff6a8ffb2ad3403247aaaeb9d280c0fe68b25\"" Sep 13 00:18:05.808227 systemd-networkd[1396]: cali44906cd0334: Gained IPv6LL Sep 13 00:18:05.841787 systemd[1]: Started cri-containerd-81fc2c2dd10015c991d3af73ccfff6a8ffb2ad3403247aaaeb9d280c0fe68b25.scope - libcontainer container 81fc2c2dd10015c991d3af73ccfff6a8ffb2ad3403247aaaeb9d280c0fe68b25. Sep 13 00:18:05.871737 systemd-networkd[1396]: cali2bcece92da1: Gained IPv6LL Sep 13 00:18:05.892501 containerd[1462]: time="2025-09-13T00:18:05.892425617Z" level=info msg="StartContainer for \"81fc2c2dd10015c991d3af73ccfff6a8ffb2ad3403247aaaeb9d280c0fe68b25\" returns successfully" Sep 13 00:18:06.736669 kubelet[2558]: E0913 00:18:06.735168 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:06.737940 kubelet[2558]: E0913 00:18:06.737889 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:06.756059 kubelet[2558]: I0913 00:18:06.754217 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5567664f8d-nl824" podStartSLOduration=33.190187938 podStartE2EDuration="38.754198831s" podCreationTimestamp="2025-09-13 00:17:28 +0000 UTC" firstStartedPulling="2025-09-13 00:18:00.210469493 +0000 UTC m=+50.053701723" lastFinishedPulling="2025-09-13 00:18:05.774480386 +0000 UTC m=+55.617712616" observedRunningTime="2025-09-13 00:18:06.753183229 +0000 UTC m=+56.596415459" watchObservedRunningTime="2025-09-13 00:18:06.754198831 +0000 UTC m=+56.597431061" Sep 13 00:18:07.735561 kubelet[2558]: I0913 00:18:07.735508 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:18:07.736049 kubelet[2558]: E0913 00:18:07.735955 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:08.126820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4181075402.mount: Deactivated successfully. 
Sep 13 00:18:08.782265 containerd[1462]: time="2025-09-13T00:18:08.782191119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:08.783236 containerd[1462]: time="2025-09-13T00:18:08.783186310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:18:08.784654 containerd[1462]: time="2025-09-13T00:18:08.784611196Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:08.787963 containerd[1462]: time="2025-09-13T00:18:08.787918167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:08.788689 containerd[1462]: time="2025-09-13T00:18:08.788660784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.011990453s" Sep 13 00:18:08.788757 containerd[1462]: time="2025-09-13T00:18:08.788692415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:18:08.789816 containerd[1462]: time="2025-09-13T00:18:08.789772049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:18:08.797888 containerd[1462]: time="2025-09-13T00:18:08.797818503Z" level=info msg="CreateContainer within sandbox \"d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:18:08.815346 containerd[1462]: time="2025-09-13T00:18:08.815277717Z" level=info msg="CreateContainer within sandbox \"d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"940b793aafc94d8e4e630deececfb96c3424f66a34eede258c9ab5d8248292df\"" Sep 13 00:18:08.816163 containerd[1462]: time="2025-09-13T00:18:08.816130154Z" level=info msg="StartContainer for \"940b793aafc94d8e4e630deececfb96c3424f66a34eede258c9ab5d8248292df\"" Sep 13 00:18:08.854707 systemd[1]: Started cri-containerd-940b793aafc94d8e4e630deececfb96c3424f66a34eede258c9ab5d8248292df.scope - libcontainer container 940b793aafc94d8e4e630deececfb96c3424f66a34eede258c9ab5d8248292df. 
Sep 13 00:18:08.908317 containerd[1462]: time="2025-09-13T00:18:08.908250466Z" level=info msg="StartContainer for \"940b793aafc94d8e4e630deececfb96c3424f66a34eede258c9ab5d8248292df\" returns successfully" Sep 13 00:18:09.760821 kubelet[2558]: I0913 00:18:09.758270 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-dgk4p" podStartSLOduration=30.287376973 podStartE2EDuration="38.758249684s" podCreationTimestamp="2025-09-13 00:17:31 +0000 UTC" firstStartedPulling="2025-09-13 00:18:00.318776963 +0000 UTC m=+50.162009193" lastFinishedPulling="2025-09-13 00:18:08.789649643 +0000 UTC m=+58.632881904" observedRunningTime="2025-09-13 00:18:09.757970519 +0000 UTC m=+59.601202759" watchObservedRunningTime="2025-09-13 00:18:09.758249684 +0000 UTC m=+59.601481915" Sep 13 00:18:10.079707 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:47004.service - OpenSSH per-connection server daemon (10.0.0.1:47004). Sep 13 00:18:10.252848 containerd[1462]: time="2025-09-13T00:18:10.252799385Z" level=info msg="StopPodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\"" Sep 13 00:18:10.266307 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 47004 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:10.268195 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:10.273938 systemd-logind[1446]: New session 11 of user core. Sep 13 00:18:10.280884 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.298 [WARNING][5252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0", GenerateName:"calico-kube-controllers-5d8667ffb7-", Namespace:"calico-system", SelfLink:"", UID:"4bc675b9-f8ab-4324-abb1-fe64dccc6391", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8667ffb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9", Pod:"calico-kube-controllers-5d8667ffb7-b5pnn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bcece92da1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.298 [INFO][5252] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.298 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" iface="eth0" netns="" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.298 [INFO][5252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.298 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.320 [INFO][5263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.320 [INFO][5263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.320 [INFO][5263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.366 [WARNING][5263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.366 [INFO][5263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.367 [INFO][5263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:10.375341 containerd[1462]: 2025-09-13 00:18:10.371 [INFO][5252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.375341 containerd[1462]: time="2025-09-13T00:18:10.375308439Z" level=info msg="TearDown network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" successfully" Sep 13 00:18:10.375341 containerd[1462]: time="2025-09-13T00:18:10.375334559Z" level=info msg="StopPodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" returns successfully" Sep 13 00:18:10.376349 containerd[1462]: time="2025-09-13T00:18:10.376291073Z" level=info msg="RemovePodSandbox for \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\"" Sep 13 00:18:10.379249 containerd[1462]: time="2025-09-13T00:18:10.379213209Z" level=info msg="Forcibly stopping sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\"" Sep 13 00:18:10.442415 sshd[5240]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:10.449002 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:47004.service: Deactivated successfully. 
Sep 13 00:18:10.451780 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:18:10.452612 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:18:10.454070 systemd-logind[1446]: Removed session 11. Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.424 [WARNING][5290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0", GenerateName:"calico-kube-controllers-5d8667ffb7-", Namespace:"calico-system", SelfLink:"", UID:"4bc675b9-f8ab-4324-abb1-fe64dccc6391", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8667ffb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9", Pod:"calico-kube-controllers-5d8667ffb7-b5pnn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bcece92da1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.424 [INFO][5290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.424 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" iface="eth0" netns="" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.425 [INFO][5290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.425 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.453 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.454 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
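[Editorial aside] The IPAM exchange that brackets this point — "About to acquire host-wide IPAM lock" → "Acquired" → a WARNING that the address doesn't exist → release by workloadID → "Released host-wide IPAM lock" — repeats for every sandbox teardown below. A toy Go sketch of that idempotent release-under-lock pattern, assuming a simplified in-memory store (ipamStore and its fields are invented names, not Calico's implementation):

package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu         sync.Mutex      // stands in for the "host-wide IPAM lock"
	byHandle   map[string]bool // allocations keyed by handle ID
	byWorkload map[string]bool // allocations keyed by workload ID
}

func (s *ipamStore) release(handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock." on return

	if !s.byHandle[handleID] {
		// Mirrors the log: a missing allocation is warned about and ignored,
		// so a repeated teardown stays safe.
		fmt.Println("WARNING: Asked to release address but it doesn't exist. Ignoring")
	}
	delete(s.byHandle, handleID)
	delete(s.byWorkload, workloadID) // "Releasing address using workloadID"
}

func main() {
	s := &ipamStore{byHandle: map[string]bool{}, byWorkload: map[string]bool{}}
	s.release(
		"k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50",
		"localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0",
	)
}

Releasing by both keys under one lock is what makes the second, "Forcibly stopping" pass over the same sandbox harmless.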
Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.454 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.462 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.462 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" HandleID="k8s-pod-network.312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Workload="localhost-k8s-calico--kube--controllers--5d8667ffb7--b5pnn-eth0" Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.466 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:10.474239 containerd[1462]: 2025-09-13 00:18:10.470 [INFO][5290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50" Sep 13 00:18:10.507901 containerd[1462]: time="2025-09-13T00:18:10.474269870Z" level=info msg="TearDown network for sandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" successfully" Sep 13 00:18:10.741064 containerd[1462]: time="2025-09-13T00:18:10.740728837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:10.741064 containerd[1462]: time="2025-09-13T00:18:10.740819371Z" level=info msg="RemovePodSandbox \"312a8be6d2ff2feb1a000d061ac051d6e40dcd228adbd0a892ac7e8899561e50\" returns successfully" Sep 13 00:18:10.741469 containerd[1462]: time="2025-09-13T00:18:10.741442556Z" level=info msg="StopPodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\"" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.782 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6748f3df-f396-46d5-b632-65b0d3fe85e1", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e", Pod:"calico-apiserver-5567664f8d-svmbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18e193246c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.783 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.783 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" iface="eth0" netns="" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.783 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.783 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.808 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.809 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.809 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.816 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.816 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.818 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:10.825073 containerd[1462]: 2025-09-13 00:18:10.821 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.825587 containerd[1462]: time="2025-09-13T00:18:10.825122943Z" level=info msg="TearDown network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" successfully" Sep 13 00:18:10.825587 containerd[1462]: time="2025-09-13T00:18:10.825166657Z" level=info msg="StopPodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" returns successfully" Sep 13 00:18:10.825798 containerd[1462]: time="2025-09-13T00:18:10.825764574Z" level=info msg="RemovePodSandbox for \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\"" Sep 13 00:18:10.825831 containerd[1462]: time="2025-09-13T00:18:10.825808699Z" level=info msg="Forcibly stopping sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\"" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.868 [WARNING][5369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6748f3df-f396-46d5-b632-65b0d3fe85e1", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68a001b4b44c3368963fe2c871a6c185a8cfebc38d57645feaf428d55d04869e", Pod:"calico-apiserver-5567664f8d-svmbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18e193246c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.869 [INFO][5369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.869 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" iface="eth0" netns="" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.869 [INFO][5369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.869 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.902 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.902 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.902 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.909 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.909 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" HandleID="k8s-pod-network.70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Workload="localhost-k8s-calico--apiserver--5567664f8d--svmbp-eth0" Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.911 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:10.918459 containerd[1462]: 2025-09-13 00:18:10.914 [INFO][5369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd" Sep 13 00:18:10.919021 containerd[1462]: time="2025-09-13T00:18:10.918511163Z" level=info msg="TearDown network for sandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" successfully" Sep 13 00:18:10.933057 containerd[1462]: time="2025-09-13T00:18:10.932990119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:10.933214 containerd[1462]: time="2025-09-13T00:18:10.933081143Z" level=info msg="RemovePodSandbox \"70e553ce95590bc6b25e402c181274b36bc3e47ca1dff4feb7f4cd0786502bcd\" returns successfully" Sep 13 00:18:10.933757 containerd[1462]: time="2025-09-13T00:18:10.933726160Z" level=info msg="StopPodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\"" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:10.981 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bg67g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"631d57a2-dd3c-4c24-8d55-9feb2884e566", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022", Pod:"coredns-674b8bbfcf-bg67g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ac9f37a677", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:10.982 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:10.982 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" iface="eth0" netns="" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:10.982 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:10.982 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.016 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.017 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.017 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.024 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.024 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.025 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:11.040280 containerd[1462]: 2025-09-13 00:18:11.036 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.040954 containerd[1462]: time="2025-09-13T00:18:11.040338394Z" level=info msg="TearDown network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" successfully" Sep 13 00:18:11.040954 containerd[1462]: time="2025-09-13T00:18:11.040373982Z" level=info msg="StopPodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" returns successfully" Sep 13 00:18:11.041540 containerd[1462]: time="2025-09-13T00:18:11.041404989Z" level=info msg="RemovePodSandbox for \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\"" Sep 13 00:18:11.041540 containerd[1462]: time="2025-09-13T00:18:11.041442661Z" level=info msg="Forcibly stopping sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\"" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.086 [WARNING][5425] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bg67g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"631d57a2-dd3c-4c24-8d55-9feb2884e566", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"517c5dd016d8eff6641a1f5e626b7c4e862dcd97d13c51fafd5da062dc21c022", Pod:"coredns-674b8bbfcf-bg67g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ac9f37a677", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.087 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.087 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" iface="eth0" netns="" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.087 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.087 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.118 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.118 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.118 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.126 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.126 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" HandleID="k8s-pod-network.9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Workload="localhost-k8s-coredns--674b8bbfcf--bg67g-eth0" Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.128 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:11.135531 containerd[1462]: 2025-09-13 00:18:11.132 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce" Sep 13 00:18:11.136267 containerd[1462]: time="2025-09-13T00:18:11.135606004Z" level=info msg="TearDown network for sandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" successfully" Sep 13 00:18:11.440892 containerd[1462]: time="2025-09-13T00:18:11.440698904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:11.440892 containerd[1462]: time="2025-09-13T00:18:11.440811670Z" level=info msg="RemovePodSandbox \"9bd49462806c9f465f44b0781c685b20d35796e83a37cc7de607087ced6493ce\" returns successfully" Sep 13 00:18:11.442467 containerd[1462]: time="2025-09-13T00:18:11.442084611Z" level=info msg="StopPodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\"" Sep 13 00:18:11.448119 containerd[1462]: time="2025-09-13T00:18:11.447942351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:11.449879 containerd[1462]: time="2025-09-13T00:18:11.449807588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:18:11.452821 containerd[1462]: time="2025-09-13T00:18:11.452768204Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:11.455414 containerd[1462]: time="2025-09-13T00:18:11.455342910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:11.456623 containerd[1462]: time="2025-09-13T00:18:11.456590011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.666783105s" Sep 13 00:18:11.456712 containerd[1462]: time="2025-09-13T00:18:11.456629016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" 
returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:18:11.458224 containerd[1462]: time="2025-09-13T00:18:11.457940471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:18:11.463337 containerd[1462]: time="2025-09-13T00:18:11.463285157Z" level=info msg="CreateContainer within sandbox \"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:18:11.479907 containerd[1462]: time="2025-09-13T00:18:11.479730447Z" level=info msg="CreateContainer within sandbox \"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e515eb8700fc7067be9eba602edb6caad6548d3dae177517ca453121a5584cf7\"" Sep 13 00:18:11.480647 containerd[1462]: time="2025-09-13T00:18:11.480570437Z" level=info msg="StartContainer for \"e515eb8700fc7067be9eba602edb6caad6548d3dae177517ca453121a5584cf7\"" Sep 13 00:18:11.531852 systemd[1]: Started cri-containerd-e515eb8700fc7067be9eba602edb6caad6548d3dae177517ca453121a5584cf7.scope - libcontainer container e515eb8700fc7067be9eba602edb6caad6548d3dae177517ca453121a5584cf7. Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.491 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cnh9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0bee5869-7316-4315-890e-b413da2035a5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf", Pod:"csi-node-driver-cnh9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44906cd0334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.492 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.492 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" iface="eth0" netns="" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.492 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.492 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.528 [INFO][5465] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.528 [INFO][5465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.528 [INFO][5465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.536 [WARNING][5465] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.537 [INFO][5465] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.539 [INFO][5465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:11.548981 containerd[1462]: 2025-09-13 00:18:11.545 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.549991 containerd[1462]: time="2025-09-13T00:18:11.549016083Z" level=info msg="TearDown network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" successfully" Sep 13 00:18:11.549991 containerd[1462]: time="2025-09-13T00:18:11.549046761Z" level=info msg="StopPodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" returns successfully" Sep 13 00:18:11.550623 containerd[1462]: time="2025-09-13T00:18:11.550584229Z" level=info msg="RemovePodSandbox for \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\"" Sep 13 00:18:11.550674 containerd[1462]: time="2025-09-13T00:18:11.550634786Z" level=info msg="Forcibly stopping sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\"" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.596 [WARNING][5503] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cnh9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0bee5869-7316-4315-890e-b413da2035a5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf", Pod:"csi-node-driver-cnh9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44906cd0334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.597 [INFO][5503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.597 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" iface="eth0" netns="" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.597 [INFO][5503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.597 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.626 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.626 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.626 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.635 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.635 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" HandleID="k8s-pod-network.5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Workload="localhost-k8s-csi--node--driver--cnh9g-eth0" Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.637 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:11.643493 containerd[1462]: 2025-09-13 00:18:11.640 [INFO][5503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64" Sep 13 00:18:11.644148 containerd[1462]: time="2025-09-13T00:18:11.644103316Z" level=info msg="TearDown network for sandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" successfully" Sep 13 00:18:11.815880 containerd[1462]: time="2025-09-13T00:18:11.815654438Z" level=info msg="StartContainer for \"e515eb8700fc7067be9eba602edb6caad6548d3dae177517ca453121a5584cf7\" returns successfully" Sep 13 00:18:11.822476 containerd[1462]: time="2025-09-13T00:18:11.821617330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:11.822476 containerd[1462]: time="2025-09-13T00:18:11.821683577Z" level=info msg="RemovePodSandbox \"5b63c5ec809353cb9d3a63039a42e2ff281ce17b8a25fb6abdf6f38093b7fe64\" returns successfully" Sep 13 00:18:11.822476 containerd[1462]: time="2025-09-13T00:18:11.822375053Z" level=info msg="StopPodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\"" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.862 [WARNING][5541] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" WorkloadEndpoint="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.862 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.862 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" iface="eth0" netns="" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.862 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.862 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.886 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.886 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.886 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.893 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.893 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.894 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:11.900727 containerd[1462]: 2025-09-13 00:18:11.897 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.901133 containerd[1462]: time="2025-09-13T00:18:11.900780002Z" level=info msg="TearDown network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" successfully" Sep 13 00:18:11.901133 containerd[1462]: time="2025-09-13T00:18:11.900810271Z" level=info msg="StopPodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" returns successfully" Sep 13 00:18:11.901476 containerd[1462]: time="2025-09-13T00:18:11.901437753Z" level=info msg="RemovePodSandbox for \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\"" Sep 13 00:18:11.901535 containerd[1462]: time="2025-09-13T00:18:11.901482038Z" level=info msg="Forcibly stopping sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\"" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.941 [WARNING][5569] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" WorkloadEndpoint="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.941 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.941 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" iface="eth0" netns="" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.941 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.941 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.967 [INFO][5578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.967 [INFO][5578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.968 [INFO][5578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.973 [WARNING][5578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.973 [INFO][5578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" HandleID="k8s-pod-network.b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Workload="localhost-k8s-whisker--59c5768889--w4vfq-eth0" Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.974 [INFO][5578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:11.981111 containerd[1462]: 2025-09-13 00:18:11.977 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2" Sep 13 00:18:11.981629 containerd[1462]: time="2025-09-13T00:18:11.981143978Z" level=info msg="TearDown network for sandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" successfully" Sep 13 00:18:11.985835 containerd[1462]: time="2025-09-13T00:18:11.985782782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:11.985896 containerd[1462]: time="2025-09-13T00:18:11.985861713Z" level=info msg="RemovePodSandbox \"b78ddafaa1e7a6dbe3bfbcd0260576019bc3ab89d8b14075aaacb0c66241a3d2\" returns successfully" Sep 13 00:18:11.986453 containerd[1462]: time="2025-09-13T00:18:11.986402540Z" level=info msg="StopPodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\"" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.021 [WARNING][5596] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9994ca67-7eed-4733-95f7-6dbed4d7c37b", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee", Pod:"coredns-674b8bbfcf-n4ctb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif93d59d5539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.021 [INFO][5596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.021 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" iface="eth0" netns="" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.021 [INFO][5596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.021 [INFO][5596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.044 [INFO][5605] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.044 [INFO][5605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.044 [INFO][5605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.050 [WARNING][5605] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.050 [INFO][5605] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.052 [INFO][5605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:12.057814 containerd[1462]: 2025-09-13 00:18:12.054 [INFO][5596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.058444 containerd[1462]: time="2025-09-13T00:18:12.057877221Z" level=info msg="TearDown network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" successfully" Sep 13 00:18:12.058444 containerd[1462]: time="2025-09-13T00:18:12.057912568Z" level=info msg="StopPodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" returns successfully" Sep 13 00:18:12.058647 containerd[1462]: time="2025-09-13T00:18:12.058609003Z" level=info msg="RemovePodSandbox for \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\"" Sep 13 00:18:12.058714 containerd[1462]: time="2025-09-13T00:18:12.058669649Z" level=info msg="Forcibly stopping sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\"" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.096 [WARNING][5622] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9994ca67-7eed-4733-95f7-6dbed4d7c37b", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02357dd3370ce6fcc3517c519f91c33bfc6f3ec4a942f7026cb8ad1846d1f0ee", Pod:"coredns-674b8bbfcf-n4ctb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif93d59d5539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.097 [INFO][5622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.097 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" iface="eth0" netns="" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.097 [INFO][5622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.097 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.120 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.120 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.120 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.126 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.126 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" HandleID="k8s-pod-network.aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Workload="localhost-k8s-coredns--674b8bbfcf--n4ctb-eth0" Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.128 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:12.134753 containerd[1462]: 2025-09-13 00:18:12.131 [INFO][5622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5" Sep 13 00:18:12.134753 containerd[1462]: time="2025-09-13T00:18:12.134714412Z" level=info msg="TearDown network for sandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" successfully" Sep 13 00:18:12.139669 containerd[1462]: time="2025-09-13T00:18:12.139620863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:12.139749 containerd[1462]: time="2025-09-13T00:18:12.139679376Z" level=info msg="RemovePodSandbox \"aeb9fe897dcb5e3b38d396082b12353ed9b9177787a389cc652e4c9bc95ac7f5\" returns successfully" Sep 13 00:18:12.140267 containerd[1462]: time="2025-09-13T00:18:12.140238988Z" level=info msg="StopPodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\"" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.177 [WARNING][5649] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--dgk4p-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1f424473-224d-4e51-9ca9-c442b0bc325d", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9", Pod:"goldmane-54d579b49d-dgk4p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83dbf31f82f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.177 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.177 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" iface="eth0" netns="" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.177 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.177 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.200 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.200 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.200 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.206 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.207 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.209 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:12.216250 containerd[1462]: 2025-09-13 00:18:12.212 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.217243 containerd[1462]: time="2025-09-13T00:18:12.216298559Z" level=info msg="TearDown network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" successfully" Sep 13 00:18:12.217243 containerd[1462]: time="2025-09-13T00:18:12.216329959Z" level=info msg="StopPodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" returns successfully" Sep 13 00:18:12.217243 containerd[1462]: time="2025-09-13T00:18:12.216962201Z" level=info msg="RemovePodSandbox for \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\"" Sep 13 00:18:12.217243 containerd[1462]: time="2025-09-13T00:18:12.217013349Z" level=info msg="Forcibly stopping sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\"" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.259 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--dgk4p-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1f424473-224d-4e51-9ca9-c442b0bc325d", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82ca7c76991c5d428478a4924e4d3c153bdf49a6c15a2b9e1e24de3a2858cf9", Pod:"goldmane-54d579b49d-dgk4p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83dbf31f82f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.259 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.259 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" iface="eth0" netns="" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.259 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.259 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.289 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.289 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.289 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.295 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.295 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" HandleID="k8s-pod-network.d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Workload="localhost-k8s-goldmane--54d579b49d--dgk4p-eth0" Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.297 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:12.303757 containerd[1462]: 2025-09-13 00:18:12.300 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584" Sep 13 00:18:12.304416 containerd[1462]: time="2025-09-13T00:18:12.304366150Z" level=info msg="TearDown network for sandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" successfully" Sep 13 00:18:12.309177 containerd[1462]: time="2025-09-13T00:18:12.309132183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:12.309247 containerd[1462]: time="2025-09-13T00:18:12.309209741Z" level=info msg="RemovePodSandbox \"d4e9e386830a0a715c690ef705b9b2c96242164e6dee5985cd57f1571f8f9584\" returns successfully" Sep 13 00:18:12.309724 containerd[1462]: time="2025-09-13T00:18:12.309700081Z" level=info msg="StopPodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\"" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.347 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"42dd6ade-572a-4087-84d6-79c32851c332", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded", Pod:"calico-apiserver-5567664f8d-nl824", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cb48d2a36a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.347 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.347 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" iface="eth0" netns="" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.347 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.347 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.371 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.371 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.371 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.378 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.378 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.380 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:12.387873 containerd[1462]: 2025-09-13 00:18:12.384 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.388427 containerd[1462]: time="2025-09-13T00:18:12.387853073Z" level=info msg="TearDown network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" successfully" Sep 13 00:18:12.388427 containerd[1462]: time="2025-09-13T00:18:12.387895113Z" level=info msg="StopPodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" returns successfully" Sep 13 00:18:12.388608 containerd[1462]: time="2025-09-13T00:18:12.388576809Z" level=info msg="RemovePodSandbox for \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\"" Sep 13 00:18:12.388701 containerd[1462]: time="2025-09-13T00:18:12.388615103Z" level=info msg="Forcibly stopping sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\"" Sep 13 00:18:12.412738 kubelet[2558]: I0913 00:18:12.412217 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.426 [WARNING][5726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0", GenerateName:"calico-apiserver-5567664f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"42dd6ade-572a-4087-84d6-79c32851c332", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567664f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ba15cca60b6eca4a74504f100a11ea18c46b8377c3d91127bbdd2dddd221ded", Pod:"calico-apiserver-5567664f8d-nl824", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cb48d2a36a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.427 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.427 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" iface="eth0" netns="" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.427 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.427 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.464 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.465 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.465 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.470 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.470 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" HandleID="k8s-pod-network.796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Workload="localhost-k8s-calico--apiserver--5567664f8d--nl824-eth0" Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.471 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:12.478262 containerd[1462]: 2025-09-13 00:18:12.474 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae" Sep 13 00:18:12.479005 containerd[1462]: time="2025-09-13T00:18:12.478299301Z" level=info msg="TearDown network for sandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" successfully" Sep 13 00:18:12.483103 containerd[1462]: time="2025-09-13T00:18:12.483037700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:18:12.483189 containerd[1462]: time="2025-09-13T00:18:12.483163852Z" level=info msg="RemovePodSandbox \"796a690f9b1ece7bdfa246dae58206c58b4e7d36939328c629eb73116d589dae\" returns successfully" Sep 13 00:18:13.811671 containerd[1462]: time="2025-09-13T00:18:13.811532614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:13.814392 containerd[1462]: time="2025-09-13T00:18:13.814299895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:18:13.816989 containerd[1462]: time="2025-09-13T00:18:13.816939019Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:13.819853 containerd[1462]: time="2025-09-13T00:18:13.819810227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:13.820431 containerd[1462]: time="2025-09-13T00:18:13.820388565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.362407276s" Sep 13 00:18:13.820431 containerd[1462]: time="2025-09-13T00:18:13.820419213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:18:13.846599 containerd[1462]: time="2025-09-13T00:18:13.846529718Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:18:13.865022 containerd[1462]: time="2025-09-13T00:18:13.864955164Z" level=info msg="CreateContainer within sandbox \"190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:18:13.914272 containerd[1462]: time="2025-09-13T00:18:13.914204873Z" level=info msg="CreateContainer within sandbox \"190f0488090130e47412c8111f25434989b2f0e32444a2a9919a7efc28b6b3d9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a91cb522de5a5de677a50ccbedb44d245aa96daa4bd6d1a29e980a908e5220ac\"" Sep 13 00:18:13.916053 containerd[1462]: time="2025-09-13T00:18:13.915991063Z" level=info msg="StartContainer for \"a91cb522de5a5de677a50ccbedb44d245aa96daa4bd6d1a29e980a908e5220ac\"" Sep 13 00:18:13.975701 systemd[1]: Started cri-containerd-a91cb522de5a5de677a50ccbedb44d245aa96daa4bd6d1a29e980a908e5220ac.scope - libcontainer container a91cb522de5a5de677a50ccbedb44d245aa96daa4bd6d1a29e980a908e5220ac. Sep 13 00:18:14.076979 containerd[1462]: time="2025-09-13T00:18:14.076828827Z" level=info msg="StartContainer for \"a91cb522de5a5de677a50ccbedb44d245aa96daa4bd6d1a29e980a908e5220ac\" returns successfully" Sep 13 00:18:15.016052 kubelet[2558]: I0913 00:18:15.015975 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d8667ffb7-b5pnn" podStartSLOduration=34.194228932 podStartE2EDuration="43.015951818s" podCreationTimestamp="2025-09-13 00:17:32 +0000 UTC" firstStartedPulling="2025-09-13 00:18:05.024616447 +0000 UTC m=+54.867848677" lastFinishedPulling="2025-09-13 00:18:13.846339333 +0000 UTC m=+63.689571563" observedRunningTime="2025-09-13 00:18:14.904389202 +0000 UTC m=+64.747621432" watchObservedRunningTime="2025-09-13 00:18:15.015951818 +0000 UTC m=+64.859184048" Sep 13 00:18:15.455240 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:47008.service - OpenSSH per-connection server daemon (10.0.0.1:47008). Sep 13 00:18:15.529743 sshd[5845]: Accepted publickey for core from 10.0.0.1 port 47008 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:15.531999 sshd[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:15.537513 systemd-logind[1446]: New session 12 of user core. Sep 13 00:18:15.545729 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:18:15.801452 sshd[5845]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:15.805929 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:47008.service: Deactivated successfully. Sep 13 00:18:15.808694 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:18:15.809433 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:18:15.810613 systemd-logind[1446]: Removed session 12. Sep 13 00:18:17.532041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037325657.mount: Deactivated successfully. 
Sep 13 00:18:18.290821 containerd[1462]: time="2025-09-13T00:18:18.290694532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:18.294235 containerd[1462]: time="2025-09-13T00:18:18.294171915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:18:18.297284 containerd[1462]: time="2025-09-13T00:18:18.297242501Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:18.310077 containerd[1462]: time="2025-09-13T00:18:18.309981376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:18.311229 containerd[1462]: time="2025-09-13T00:18:18.311183443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.464600283s" Sep 13 00:18:18.311298 containerd[1462]: time="2025-09-13T00:18:18.311231194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:18:18.313250 containerd[1462]: time="2025-09-13T00:18:18.313188644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:18:18.323896 containerd[1462]: time="2025-09-13T00:18:18.323849690Z" level=info msg="CreateContainer within sandbox \"e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:18:18.347519 containerd[1462]: time="2025-09-13T00:18:18.347457189Z" level=info msg="CreateContainer within sandbox \"e92e0ca6450a0133fed0c60731fc36948c315b5720e2a06fc589a7ef394efd82\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c92c43609695e0bed4de600ae392d42193eb5c94699cbfc0c2c80e3b35504425\"" Sep 13 00:18:18.348186 containerd[1462]: time="2025-09-13T00:18:18.348121026Z" level=info msg="StartContainer for \"c92c43609695e0bed4de600ae392d42193eb5c94699cbfc0c2c80e3b35504425\"" Sep 13 00:18:18.392891 systemd[1]: Started cri-containerd-c92c43609695e0bed4de600ae392d42193eb5c94699cbfc0c2c80e3b35504425.scope - libcontainer container c92c43609695e0bed4de600ae392d42193eb5c94699cbfc0c2c80e3b35504425. 
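The whisker-backend pull above is bracketed by its own entries: the PullImage request was logged at 00:18:13.846 and the "Pulled image" completion at 00:18:18.311, consistent with the reported "4.464600283s". A minimal sketch of driving and timing the same pull through the containerd Go client follows; the socket path and the "k8s.io" namespace (where CRI-managed images live) are assumptions matching a default setup like this one.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket; an assumption, like everything below.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ref := "ghcr.io/flatcar/calico/whisker-backend:v3.30.3"
        start := time.Now()
        image, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // The daemon above reported ~4.46s for this pull; the client sees
        // a comparable wall-clock duration.
        fmt.Printf("pulled %s in %s\n", image.Name(), time.Since(start))
    }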
Sep 13 00:18:18.465124 containerd[1462]: time="2025-09-13T00:18:18.464971513Z" level=info msg="StartContainer for \"c92c43609695e0bed4de600ae392d42193eb5c94699cbfc0c2c80e3b35504425\" returns successfully" Sep 13 00:18:19.839581 containerd[1462]: time="2025-09-13T00:18:19.839444297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:19.841067 containerd[1462]: time="2025-09-13T00:18:19.841028804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:18:19.842631 containerd[1462]: time="2025-09-13T00:18:19.842604061Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:19.844878 containerd[1462]: time="2025-09-13T00:18:19.844841313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:19.845453 containerd[1462]: time="2025-09-13T00:18:19.845423665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.532188392s" Sep 13 00:18:19.845497 containerd[1462]: time="2025-09-13T00:18:19.845451018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:18:19.851043 containerd[1462]: time="2025-09-13T00:18:19.851014701Z" level=info msg="CreateContainer within sandbox \"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:18:20.013174 containerd[1462]: time="2025-09-13T00:18:20.013090483Z" level=info msg="CreateContainer within sandbox \"ac23cdaae12c91273dd779fe9c7996523b37366f7faa45b7c7a5bf7bf0e2cddf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e8c85f9580f38373adbf7e5e22ba6ea3837745261c263d9c557cb2deffb995a\"" Sep 13 00:18:20.013796 containerd[1462]: time="2025-09-13T00:18:20.013768657Z" level=info msg="StartContainer for \"8e8c85f9580f38373adbf7e5e22ba6ea3837745261c263d9c557cb2deffb995a\"" Sep 13 00:18:20.044173 systemd[1]: run-containerd-runc-k8s.io-8e8c85f9580f38373adbf7e5e22ba6ea3837745261c263d9c557cb2deffb995a-runc.yBBjcp.mount: Deactivated successfully. Sep 13 00:18:20.061788 systemd[1]: Started cri-containerd-8e8c85f9580f38373adbf7e5e22ba6ea3837745261c263d9c557cb2deffb995a.scope - libcontainer container 8e8c85f9580f38373adbf7e5e22ba6ea3837745261c263d9c557cb2deffb995a. 
Sep 13 00:18:20.095876 containerd[1462]: time="2025-09-13T00:18:20.095737387Z" level=info msg="StartContainer for \"8e8c85f9580f38373adbf7e5e22ba6ea3837745261c263d9c557cb2deffb995a\" returns successfully" Sep 13 00:18:20.686831 kubelet[2558]: I0913 00:18:20.686776 2558 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:18:20.688122 kubelet[2558]: I0913 00:18:20.688097 2558 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:18:20.815462 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544). Sep 13 00:18:20.877458 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:20.879689 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:20.884712 systemd-logind[1446]: New session 13 of user core. Sep 13 00:18:20.892705 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:18:20.976385 kubelet[2558]: I0913 00:18:20.975861 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-84ddb6c6dd-fp5jn" podStartSLOduration=4.026632388 podStartE2EDuration="22.97583421s" podCreationTimestamp="2025-09-13 00:17:58 +0000 UTC" firstStartedPulling="2025-09-13 00:17:59.363834961 +0000 UTC m=+49.207067191" lastFinishedPulling="2025-09-13 00:18:18.313036773 +0000 UTC m=+68.156269013" observedRunningTime="2025-09-13 00:18:18.876306768 +0000 UTC m=+68.719538998" watchObservedRunningTime="2025-09-13 00:18:20.97583421 +0000 UTC m=+70.819066460" Sep 13 00:18:20.976385 kubelet[2558]: I0913 00:18:20.976030 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cnh9g" podStartSLOduration=34.111742845 podStartE2EDuration="48.976025515s" podCreationTimestamp="2025-09-13 00:17:32 +0000 UTC" firstStartedPulling="2025-09-13 00:18:04.98189761 +0000 UTC m=+54.825129840" lastFinishedPulling="2025-09-13 00:18:19.84618028 +0000 UTC m=+69.689412510" observedRunningTime="2025-09-13 00:18:20.97569289 +0000 UTC m=+70.818925120" watchObservedRunningTime="2025-09-13 00:18:20.976025515 +0000 UTC m=+70.819257745" Sep 13 00:18:21.153582 sshd[5958]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:21.161854 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:51544.service: Deactivated successfully. Sep 13 00:18:21.164200 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:18:21.165768 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:18:21.175491 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:51550.service - OpenSSH per-connection server daemon (10.0.0.1:51550). Sep 13 00:18:21.176699 systemd-logind[1446]: Removed session 13. Sep 13 00:18:21.210803 sshd[5973]: Accepted publickey for core from 10.0.0.1 port 51550 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:21.212561 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:21.217402 systemd-logind[1446]: New session 14 of user core. Sep 13 00:18:21.222705 systemd[1]: Started session-14.scope - Session 14 of User core. 
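The two pod_startup_latency_tracker entries above fit a simple identity: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end time with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, so pull time does not count against the startup SLO. The Go snippet below reproduces the csi-node-driver-cnh9g numbers; timestamps are copied from the entry with the monotonic "m=+..." suffix dropped, and the parse layout is the one time.Time's String method produces.

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches how the tracker prints time.Time values above.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-13 00:17:32 +0000 UTC")
        firstPull := mustParse("2025-09-13 00:18:04.98189761 +0000 UTC")
        lastPull := mustParse("2025-09-13 00:18:19.84618028 +0000 UTC")
        observed := mustParse("2025-09-13 00:18:20.976025515 +0000 UTC")

        e2e := observed.Sub(created)       // podStartE2EDuration: 48.976025515s
        pulling := lastPull.Sub(firstPull) // image-pull window: 14.86428267s
        slo := e2e - pulling               // podStartSLOduration: 34.111742845s
        fmt.Println(e2e, pulling, slo)
    }

Running it prints 48.976025515s, 14.86428267s, and 34.111742845s, matching the logged podStartE2EDuration and podStartSLOduration exactly; the same arithmetic checks out for the whisker-84ddb6c6dd-fp5jn entry.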
Sep 13 00:18:21.612236 sshd[5973]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:21.620924 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:51550.service: Deactivated successfully. Sep 13 00:18:21.623130 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:18:21.624915 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:18:21.635837 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:51552.service - OpenSSH per-connection server daemon (10.0.0.1:51552). Sep 13 00:18:21.636975 systemd-logind[1446]: Removed session 14. Sep 13 00:18:21.672350 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 51552 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:21.674350 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:21.679808 systemd-logind[1446]: New session 15 of user core. Sep 13 00:18:21.693720 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:18:21.948853 sshd[5986]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:21.954294 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:51552.service: Deactivated successfully. Sep 13 00:18:21.956682 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:18:21.957751 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:18:21.958847 systemd-logind[1446]: Removed session 15. Sep 13 00:18:24.269147 kubelet[2558]: E0913 00:18:24.269099 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:25.269576 kubelet[2558]: E0913 00:18:25.269495 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:26.964973 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:51556.service - OpenSSH per-connection server daemon (10.0.0.1:51556). Sep 13 00:18:27.007127 sshd[6003]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:27.009318 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:27.013949 systemd-logind[1446]: New session 16 of user core. Sep 13 00:18:27.022735 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:18:27.142795 sshd[6003]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:27.147531 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:51556.service: Deactivated successfully. Sep 13 00:18:27.149996 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:18:27.150891 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:18:27.152562 systemd-logind[1446]: Removed session 16. Sep 13 00:18:28.269271 kubelet[2558]: E0913 00:18:28.269155 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:32.160919 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:35990.service - OpenSSH per-connection server daemon (10.0.0.1:35990). 
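The dns.go "Nameserver limits exceeded" warnings above come from kubelet trimming a pod's resolv.conf: glibc resolvers honor at most three nameserver entries (MAXNS), so when the effective configuration lists more, kubelet applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs that the rest were omitted. Below is an illustrative Go sketch of that cap, not kubelet's actual code; the dropped fourth server in the example is hypothetical, since the log does not show which entry was omitted.

    package main

    import (
        "fmt"
        "strings"
    )

    // glibc's resolver uses at most MAXNS (3) nameserver entries.
    const maxNameservers = 3

    // applyNameserverLimit trims the list the way the dns.go warnings
    // above describe, returning what lands in the pod's resolv.conf.
    func applyNameserverLimit(servers []string) []string {
        if len(servers) <= maxNameservers {
            return servers
        }
        applied := servers[:maxNameservers]
        fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
            strings.Join(applied, " "))
        return applied
    }

    func main() {
        // 9.9.9.9 is a hypothetical fourth entry for illustration only.
        for _, ns := range applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) {
            fmt.Println("nameserver", ns)
        }
    }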
Sep 13 00:18:32.201167 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 35990 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:32.203072 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:32.207381 systemd-logind[1446]: New session 17 of user core. Sep 13 00:18:32.219717 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:18:32.358621 sshd[6045]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:32.365006 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:35990.service: Deactivated successfully. Sep 13 00:18:32.367655 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:18:32.368759 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:18:32.370032 systemd-logind[1446]: Removed session 17. Sep 13 00:18:37.373630 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998). Sep 13 00:18:37.432534 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:37.434576 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:37.439129 systemd-logind[1446]: New session 18 of user core. Sep 13 00:18:37.451662 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:18:37.613838 sshd[6060]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:37.618872 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:35998.service: Deactivated successfully. Sep 13 00:18:37.621317 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:18:37.622194 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:18:37.623141 systemd-logind[1446]: Removed session 18. Sep 13 00:18:42.626683 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:35586.service - OpenSSH per-connection server daemon (10.0.0.1:35586). Sep 13 00:18:42.672307 sshd[6108]: Accepted publickey for core from 10.0.0.1 port 35586 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:42.674033 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:42.678800 systemd-logind[1446]: New session 19 of user core. Sep 13 00:18:42.690771 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:18:42.884968 sshd[6108]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:42.899522 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:35586.service: Deactivated successfully. Sep 13 00:18:42.902320 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:18:42.907032 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:18:42.916065 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:35592.service - OpenSSH per-connection server daemon (10.0.0.1:35592). Sep 13 00:18:42.917152 systemd-logind[1446]: Removed session 19. Sep 13 00:18:42.954685 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 35592 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:42.956591 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:42.961647 systemd-logind[1446]: New session 20 of user core. Sep 13 00:18:42.968733 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 13 00:18:43.268697 kubelet[2558]: E0913 00:18:43.268649 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:43.326230 sshd[6122]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:43.333330 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:35592.service: Deactivated successfully. Sep 13 00:18:43.335231 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:18:43.336933 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:18:43.338240 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:35594.service - OpenSSH per-connection server daemon (10.0.0.1:35594). Sep 13 00:18:43.339186 systemd-logind[1446]: Removed session 20. Sep 13 00:18:43.392511 sshd[6135]: Accepted publickey for core from 10.0.0.1 port 35594 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:43.394446 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:43.398959 systemd-logind[1446]: New session 21 of user core. Sep 13 00:18:43.408684 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:18:43.962287 sshd[6135]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:43.973717 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:35594.service: Deactivated successfully. Sep 13 00:18:43.977194 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:18:43.979627 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:18:43.989198 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604). Sep 13 00:18:43.989887 systemd-logind[1446]: Removed session 21. Sep 13 00:18:44.036759 sshd[6156]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:44.038730 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:44.043217 systemd-logind[1446]: New session 22 of user core. Sep 13 00:18:44.053726 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:18:44.432533 sshd[6156]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:44.445630 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:35604.service: Deactivated successfully. Sep 13 00:18:44.448520 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:18:44.454278 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:18:44.466338 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608). Sep 13 00:18:44.468180 systemd-logind[1446]: Removed session 22. Sep 13 00:18:44.507909 sshd[6168]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:44.509804 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:44.514607 systemd-logind[1446]: New session 23 of user core. Sep 13 00:18:44.525770 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:18:44.661518 sshd[6168]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:44.666613 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:35608.service: Deactivated successfully. Sep 13 00:18:44.669023 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 13 00:18:44.669805 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:18:44.670929 systemd-logind[1446]: Removed session 23. Sep 13 00:18:49.677396 systemd[1]: Started sshd@23-10.0.0.148:22-10.0.0.1:35622.service - OpenSSH per-connection server daemon (10.0.0.1:35622). Sep 13 00:18:49.717823 sshd[6205]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:49.719683 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:49.723941 systemd-logind[1446]: New session 24 of user core. Sep 13 00:18:49.734706 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:18:49.874063 sshd[6205]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:49.878813 systemd[1]: sshd@23-10.0.0.148:22-10.0.0.1:35622.service: Deactivated successfully. Sep 13 00:18:49.881028 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:18:49.881837 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:18:49.883026 systemd-logind[1446]: Removed session 24. Sep 13 00:18:54.890619 systemd[1]: Started sshd@24-10.0.0.148:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). Sep 13 00:18:54.934674 sshd[6221]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:18:54.936385 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:54.941026 systemd-logind[1446]: New session 25 of user core. Sep 13 00:18:54.951679 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:18:55.105823 sshd[6221]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:55.109864 systemd[1]: sshd@24-10.0.0.148:22-10.0.0.1:54336.service: Deactivated successfully. Sep 13 00:18:55.112166 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:18:55.112812 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:18:55.113888 systemd-logind[1446]: Removed session 25. Sep 13 00:19:00.125281 systemd[1]: Started sshd@25-10.0.0.148:22-10.0.0.1:40272.service - OpenSSH per-connection server daemon (10.0.0.1:40272). Sep 13 00:19:00.182488 sshd[6257]: Accepted publickey for core from 10.0.0.1 port 40272 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:19:00.184510 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:19:00.190656 systemd-logind[1446]: New session 26 of user core. Sep 13 00:19:00.195864 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:19:00.465574 sshd[6257]: pam_unix(sshd:session): session closed for user core Sep 13 00:19:00.473649 systemd[1]: sshd@25-10.0.0.148:22-10.0.0.1:40272.service: Deactivated successfully. Sep 13 00:19:00.478431 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:19:00.480231 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:19:00.481442 systemd-logind[1446]: Removed session 26.