Mar 6 01:41:34.322786 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026 Mar 6 01:41:34.322810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:41:34.322823 kernel: BIOS-provided physical RAM map: Mar 6 01:41:34.322829 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 6 01:41:34.322834 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 6 01:41:34.322839 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 6 01:41:34.322846 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 6 01:41:34.322851 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 6 01:41:34.322857 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 6 01:41:34.322862 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 6 01:41:34.322871 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 6 01:41:34.322876 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 6 01:41:34.322882 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 6 01:41:34.322887 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 6 01:41:34.322894 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 6 01:41:34.322900 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 6 01:41:34.322909 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 6 01:41:34.322914 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 6 01:41:34.322920 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 6 01:41:34.322926 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 6 01:41:34.322932 kernel: NX (Execute Disable) protection: active Mar 6 01:41:34.322937 kernel: APIC: Static calls initialized Mar 6 01:41:34.322997 kernel: efi: EFI v2.7 by EDK II Mar 6 01:41:34.323006 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 6 01:41:34.323012 kernel: SMBIOS 2.8 present. 
Mar 6 01:41:34.323018 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 6 01:41:34.323024 kernel: Hypervisor detected: KVM Mar 6 01:41:34.323467 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 6 01:41:34.323476 kernel: kvm-clock: using sched offset of 5963759946 cycles Mar 6 01:41:34.323482 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 6 01:41:34.323489 kernel: tsc: Detected 2445.426 MHz processor Mar 6 01:41:34.323495 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 6 01:41:34.323502 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 6 01:41:34.323508 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 6 01:41:34.323514 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 6 01:41:34.323520 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 6 01:41:34.323530 kernel: Using GB pages for direct mapping Mar 6 01:41:34.323536 kernel: Secure boot disabled Mar 6 01:41:34.323542 kernel: ACPI: Early table checksum verification disabled Mar 6 01:41:34.323548 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 6 01:41:34.323558 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 6 01:41:34.323564 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:34.323571 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:34.323579 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 6 01:41:34.323586 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:34.323592 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:34.323599 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:34.323605 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:34.323611 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 6 01:41:34.323618 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 6 01:41:34.323626 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 6 01:41:34.323637 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 6 01:41:34.323644 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 6 01:41:34.323651 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 6 01:41:34.323657 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 6 01:41:34.323663 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 6 01:41:34.323669 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 6 01:41:34.323676 kernel: No NUMA configuration found Mar 6 01:41:34.323682 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 6 01:41:34.323754 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 6 01:41:34.323762 kernel: Zone ranges: Mar 6 01:41:34.323768 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 6 01:41:34.323775 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 6 01:41:34.323781 kernel: Normal empty Mar 6 01:41:34.323787 kernel: Movable zone start for each node Mar 6 01:41:34.323793 kernel: Early memory node ranges Mar 6 01:41:34.323800 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Mar 6 01:41:34.323806 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 6 01:41:34.323812 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 6 01:41:34.323821 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 6 01:41:34.323827 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 6 01:41:34.323834 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 6 01:41:34.323840 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 6 01:41:34.323847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 6 01:41:34.323853 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 6 01:41:34.323859 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 6 01:41:34.323866 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 6 01:41:34.323872 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 6 01:41:34.323880 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 6 01:41:34.323886 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 6 01:41:34.323893 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 6 01:41:34.323899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 6 01:41:34.323905 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 6 01:41:34.323912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 6 01:41:34.323918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 6 01:41:34.323924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 6 01:41:34.323931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 6 01:41:34.323937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 6 01:41:34.324014 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 6 01:41:34.324022 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 6 01:41:34.324028 kernel: TSC deadline timer available Mar 6 01:41:34.324035 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 6 01:41:34.324041 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 6 01:41:34.324047 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 6 01:41:34.324054 kernel: kvm-guest: setup PV sched yield Mar 6 01:41:34.324060 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 6 01:41:34.324067 kernel: Booting paravirtualized kernel on KVM Mar 6 01:41:34.324076 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 6 01:41:34.324083 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 6 01:41:34.324090 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 6 01:41:34.324096 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 6 01:41:34.324102 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 6 01:41:34.324109 kernel: kvm-guest: PV spinlocks enabled Mar 6 01:41:34.324115 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 6 01:41:34.324122 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:41:34.324131 kernel: random: crng init done Mar 6 
01:41:34.324138 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 6 01:41:34.324144 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 6 01:41:34.324151 kernel: Fallback order for Node 0: 0 Mar 6 01:41:34.324157 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 6 01:41:34.324164 kernel: Policy zone: DMA32 Mar 6 01:41:34.324170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 6 01:41:34.324176 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 6 01:41:34.324183 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 6 01:41:34.324191 kernel: ftrace: allocating 37996 entries in 149 pages Mar 6 01:41:34.324198 kernel: ftrace: allocated 149 pages with 4 groups Mar 6 01:41:34.324204 kernel: Dynamic Preempt: voluntary Mar 6 01:41:34.324210 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 6 01:41:34.324226 kernel: rcu: RCU event tracing is enabled. Mar 6 01:41:34.324236 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 6 01:41:34.324242 kernel: Trampoline variant of Tasks RCU enabled. Mar 6 01:41:34.324249 kernel: Rude variant of Tasks RCU enabled. Mar 6 01:41:34.324258 kernel: Tracing variant of Tasks RCU enabled. Mar 6 01:41:34.324270 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 6 01:41:34.324282 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 6 01:41:34.324295 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 6 01:41:34.324311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 6 01:41:34.324318 kernel: Console: colour dummy device 80x25 Mar 6 01:41:34.324324 kernel: printk: console [ttyS0] enabled Mar 6 01:41:34.324331 kernel: ACPI: Core revision 20230628 Mar 6 01:41:34.324338 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 6 01:41:34.324348 kernel: APIC: Switch to symmetric I/O mode setup Mar 6 01:41:34.324354 kernel: x2apic enabled Mar 6 01:41:34.324361 kernel: APIC: Switched APIC routing to: physical x2apic Mar 6 01:41:34.324368 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 6 01:41:34.324375 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 6 01:41:34.324385 kernel: kvm-guest: setup PV IPIs Mar 6 01:41:34.324396 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 6 01:41:34.324409 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 6 01:41:34.324419 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 6 01:41:34.324436 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 6 01:41:34.324787 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 6 01:41:34.324892 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 6 01:41:34.324904 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 6 01:41:34.324915 kernel: Spectre V2 : Mitigation: Retpolines Mar 6 01:41:34.324926 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 6 01:41:34.324937 kernel: Speculative Store Bypass: Vulnerable Mar 6 01:41:34.325033 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Mar 6 01:41:34.325049 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 6 01:41:34.325477 kernel: active return thunk: srso_alias_return_thunk Mar 6 01:41:34.325499 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 6 01:41:34.325511 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 6 01:41:34.325523 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 6 01:41:34.325535 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 6 01:41:34.325546 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 6 01:41:34.325558 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 6 01:41:34.325569 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 6 01:41:34.325619 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 6 01:41:34.325632 kernel: Freeing SMP alternatives memory: 32K Mar 6 01:41:34.325645 kernel: pid_max: default: 32768 minimum: 301 Mar 6 01:41:34.325655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 6 01:41:34.325667 kernel: landlock: Up and running. Mar 6 01:41:34.325680 kernel: SELinux: Initializing. Mar 6 01:41:34.325761 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 6 01:41:34.325775 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 6 01:41:34.325783 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 6 01:41:34.325801 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:41:34.325808 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:41:34.325814 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:41:34.325822 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 6 01:41:34.325829 kernel: signal: max sigframe size: 1776 Mar 6 01:41:34.325835 kernel: rcu: Hierarchical SRCU implementation. Mar 6 01:41:34.325843 kernel: rcu: Max phase no-delay instances is 400. Mar 6 01:41:34.325849 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 6 01:41:34.325856 kernel: smp: Bringing up secondary CPUs ... Mar 6 01:41:34.325866 kernel: smpboot: x86: Booting SMP configuration: Mar 6 01:41:34.325872 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 6 01:41:34.325879 kernel: smp: Brought up 1 node, 4 CPUs Mar 6 01:41:34.325885 kernel: smpboot: Max logical packages: 1 Mar 6 01:41:34.325892 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 6 01:41:34.325899 kernel: devtmpfs: initialized Mar 6 01:41:34.325905 kernel: x86/mm: Memory block size: 128MB Mar 6 01:41:34.325912 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 6 01:41:34.325919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 6 01:41:34.325928 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 6 01:41:34.325935 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 6 01:41:34.325942 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 6 01:41:34.326014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 6 01:41:34.326027 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 6 01:41:34.326039 kernel: pinctrl core: initialized pinctrl subsystem Mar 6 01:41:34.326051 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 6 01:41:34.326064 kernel: audit: initializing netlink subsys (disabled) Mar 6 01:41:34.326076 kernel: audit: type=2000 audit(1772761290.739:1): state=initialized audit_enabled=0 res=1 Mar 6 01:41:34.326094 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 6 01:41:34.326106 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 6 01:41:34.326118 kernel: cpuidle: using governor menu Mar 6 01:41:34.326131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 6 01:41:34.326143 kernel: dca service started, version 1.12.1 Mar 6 01:41:34.326156 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 6 01:41:34.326165 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 6 01:41:34.326171 kernel: PCI: Using configuration type 1 for base access Mar 6 01:41:34.326178 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 6 01:41:34.326189 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 6 01:41:34.326195 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 6 01:41:34.326202 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 6 01:41:34.326209 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 6 01:41:34.326216 kernel: ACPI: Added _OSI(Module Device) Mar 6 01:41:34.326222 kernel: ACPI: Added _OSI(Processor Device) Mar 6 01:41:34.326229 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 6 01:41:34.326242 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 6 01:41:34.326254 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 6 01:41:34.326270 kernel: ACPI: Interpreter enabled Mar 6 01:41:34.326278 kernel: ACPI: PM: (supports S0 S3 S5) Mar 6 01:41:34.326285 kernel: ACPI: Using IOAPIC for interrupt routing Mar 6 01:41:34.326292 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 6 01:41:34.326298 kernel: PCI: Using E820 reservations for host bridge windows Mar 6 01:41:34.326305 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 6 01:41:34.326312 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 6 01:41:34.326790 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 6 01:41:34.327047 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 6 01:41:34.327249 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 6 01:41:34.327266 kernel: PCI host bridge to bus 0000:00 Mar 6 01:41:34.327454 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 6 01:41:34.327578 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 6 01:41:34.327759 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 6 01:41:34.327877 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 6 01:41:34.328129 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 6 01:41:34.328304 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 6 01:41:34.328425 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 6 01:41:34.328667 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 6 01:41:34.328907 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 6 01:41:34.329164 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 6 01:41:34.329348 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 6 01:41:34.329476 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 6 01:41:34.329662 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 6 01:41:34.329888 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 6 01:41:34.330236 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 6 01:41:34.330410 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 6 01:41:34.330595 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 6 01:41:34.330849 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 6 01:41:34.331112 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 6 01:41:34.331240 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 6 01:41:34.331363 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 6 01:41:34.331539 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 6 01:41:34.331804 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 6 01:41:34.332101 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 6 01:41:34.332277 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 6 01:41:34.332475 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 6 01:41:34.332662 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 6 01:41:34.333045 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 6 01:41:34.333226 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 6 01:41:34.333408 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 6 01:41:34.333560 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 6 01:41:34.333806 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 6 01:41:34.334099 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 6 01:41:34.334244 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 6 01:41:34.334255 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 6 01:41:34.334262 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 6 01:41:34.334269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 6 01:41:34.334276 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 6 01:41:34.334288 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 6 01:41:34.334294 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 6 01:41:34.334301 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 6 01:41:34.334308 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 6 01:41:34.334314 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 6 01:41:34.334321 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 6 01:41:34.334328 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 6 01:41:34.334334 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 6 01:41:34.334341 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 6 01:41:34.334350 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 6 01:41:34.334357 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 6 01:41:34.334364 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 6 01:41:34.334370 kernel: iommu: Default domain type: Translated Mar 6 01:41:34.334377 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 6 01:41:34.334384 kernel: efivars: Registered efivars operations Mar 6 01:41:34.334391 kernel: PCI: Using ACPI for IRQ routing Mar 6 01:41:34.334398 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 6 01:41:34.334405 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 6 01:41:34.334414 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 6 01:41:34.334420 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 6 01:41:34.334427 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 6 01:41:34.334612 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 6 01:41:34.334829 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 6 01:41:34.335129 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 6 01:41:34.335143 kernel: vgaarb: loaded Mar 6 01:41:34.335151 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 6 01:41:34.335158 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Mar 6 01:41:34.335170 kernel: clocksource: Switched to clocksource kvm-clock Mar 6 01:41:34.335177 kernel: VFS: Disk quotas dquot_6.6.0 Mar 6 01:41:34.335184 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 6 01:41:34.335191 kernel: pnp: PnP ACPI init Mar 6 01:41:34.335324 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 6 01:41:34.335336 kernel: pnp: PnP ACPI: found 6 devices Mar 6 01:41:34.335343 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 6 01:41:34.335349 kernel: NET: Registered PF_INET protocol family Mar 6 01:41:34.335360 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 6 01:41:34.335367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 6 01:41:34.335374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 6 01:41:34.335380 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 6 01:41:34.335387 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 6 01:41:34.335394 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 6 01:41:34.335401 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 6 01:41:34.335408 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 6 01:41:34.335415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 6 01:41:34.335424 kernel: NET: Registered PF_XDP protocol family Mar 6 01:41:34.335546 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 6 01:41:34.335749 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 6 01:41:34.335876 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 6 01:41:34.336070 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 6 01:41:34.336185 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 6 01:41:34.336296 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 6 01:41:34.336412 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 6 01:41:34.336522 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 6 01:41:34.336531 kernel: PCI: CLS 0 bytes, default 64 Mar 6 01:41:34.336538 kernel: Initialise system trusted keyrings Mar 6 01:41:34.336545 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 6 01:41:34.336551 kernel: Key type asymmetric registered Mar 6 01:41:34.336558 kernel: Asymmetric key parser 'x509' registered Mar 6 01:41:34.336565 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 6 01:41:34.336571 kernel: io scheduler mq-deadline registered Mar 6 01:41:34.336581 kernel: io scheduler kyber registered Mar 6 01:41:34.336588 kernel: io scheduler bfq registered Mar 6 01:41:34.336595 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 6 01:41:34.336602 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 6 01:41:34.336610 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 6 01:41:34.336617 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 6 01:41:34.336623 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 6 01:41:34.336630 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 6 01:41:34.336637 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Mar 6 01:41:34.336646 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 6 01:41:34.336653 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 6 01:41:34.337035 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 6 01:41:34.337050 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 6 01:41:34.337167 kernel: rtc_cmos 00:04: registered as rtc0 Mar 6 01:41:34.337283 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:41:33 UTC (1772761293) Mar 6 01:41:34.337395 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 6 01:41:34.337404 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 6 01:41:34.337415 kernel: efifb: probing for efifb Mar 6 01:41:34.337422 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 6 01:41:34.337429 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 6 01:41:34.337436 kernel: efifb: scrolling: redraw Mar 6 01:41:34.337442 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 6 01:41:34.337449 kernel: Console: switching to colour frame buffer device 100x37 Mar 6 01:41:34.337456 kernel: fb0: EFI VGA frame buffer device Mar 6 01:41:34.337463 kernel: pstore: Using crash dump compression: deflate Mar 6 01:41:34.337470 kernel: pstore: Registered efi_pstore as persistent store backend Mar 6 01:41:34.337479 kernel: NET: Registered PF_INET6 protocol family Mar 6 01:41:34.337486 kernel: Segment Routing with IPv6 Mar 6 01:41:34.337492 kernel: In-situ OAM (IOAM) with IPv6 Mar 6 01:41:34.337499 kernel: NET: Registered PF_PACKET protocol family Mar 6 01:41:34.337506 kernel: Key type dns_resolver registered Mar 6 01:41:34.337512 kernel: IPI shorthand broadcast: enabled Mar 6 01:41:34.337538 kernel: sched_clock: Marking stable (2913033030, 481724509)->(3927332061, -532574522) Mar 6 01:41:34.337548 kernel: registered taskstats version 1 Mar 6 01:41:34.337555 kernel: Loading compiled-in X.509 certificates Mar 6 01:41:34.337564 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca' Mar 6 01:41:34.337571 kernel: Key type .fscrypt registered Mar 6 01:41:34.337578 kernel: Key type fscrypt-provisioning registered Mar 6 01:41:34.337585 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 6 01:41:34.337592 kernel: ima: Allocated hash algorithm: sha1 Mar 6 01:41:34.337599 kernel: ima: No architecture policies found Mar 6 01:41:34.337606 kernel: clk: Disabling unused clocks Mar 6 01:41:34.337613 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 6 01:41:34.337620 kernel: Write protecting the kernel read-only data: 36864k Mar 6 01:41:34.337629 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 6 01:41:34.337636 kernel: Run /init as init process Mar 6 01:41:34.337643 kernel: with arguments: Mar 6 01:41:34.337650 kernel: /init Mar 6 01:41:34.337657 kernel: with environment: Mar 6 01:41:34.337664 kernel: HOME=/ Mar 6 01:41:34.337670 kernel: TERM=linux Mar 6 01:41:34.337679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:41:34.337747 systemd[1]: Detected virtualization kvm. Mar 6 01:41:34.337755 systemd[1]: Detected architecture x86-64. 
Mar 6 01:41:34.337762 systemd[1]: Running in initrd. Mar 6 01:41:34.337769 systemd[1]: No hostname configured, using default hostname. Mar 6 01:41:34.337776 systemd[1]: Hostname set to . Mar 6 01:41:34.337784 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:41:34.337791 systemd[1]: Queued start job for default target initrd.target. Mar 6 01:41:34.337798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:41:34.337809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:41:34.337817 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 6 01:41:34.337825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 6 01:41:34.337832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 6 01:41:34.337848 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 6 01:41:34.337869 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 6 01:41:34.337884 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 6 01:41:34.337898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:41:34.337906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:41:34.337913 systemd[1]: Reached target paths.target - Path Units. Mar 6 01:41:34.337925 systemd[1]: Reached target slices.target - Slice Units. Mar 6 01:41:34.337932 systemd[1]: Reached target swap.target - Swaps. Mar 6 01:41:34.337942 systemd[1]: Reached target timers.target - Timer Units. Mar 6 01:41:34.338021 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 01:41:34.338029 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 6 01:41:34.338037 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 6 01:41:34.338045 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 6 01:41:34.338052 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:41:34.338060 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 6 01:41:34.338067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:41:34.338078 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 01:41:34.338085 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 6 01:41:34.338092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 01:41:34.338100 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 6 01:41:34.338107 systemd[1]: Starting systemd-fsck-usr.service... Mar 6 01:41:34.338114 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 01:41:34.338122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 01:41:34.338129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:41:34.338162 systemd-journald[193]: Collecting audit messages is disabled. Mar 6 01:41:34.338198 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Mar 6 01:41:34.338212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:41:34.338220 systemd[1]: Finished systemd-fsck-usr.service. Mar 6 01:41:34.338232 systemd-journald[193]: Journal started Mar 6 01:41:34.338249 systemd-journald[193]: Runtime Journal (/run/log/journal/dfcff9163a7b46cc9afa31b643c0ac06) is 6.0M, max 48.3M, 42.2M free. Mar 6 01:41:34.348507 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 01:41:34.365138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 6 01:41:34.377183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 01:41:34.382901 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 6 01:41:34.401172 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 01:41:34.414280 systemd-modules-load[195]: Inserted module 'overlay' Mar 6 01:41:34.418939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:41:34.429840 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:41:34.445839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:41:34.478171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 6 01:41:34.482103 kernel: Bridge firewalling registered Mar 6 01:41:34.482135 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 6 01:41:34.486633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 6 01:41:34.489826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 01:41:34.506484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:41:34.528629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:41:34.536455 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:41:34.551151 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 6 01:41:34.559907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:41:34.571823 dracut-cmdline[232]: dracut-dracut-053 Mar 6 01:41:34.576760 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:41:34.636338 systemd-resolved[235]: Positive Trust Anchors: Mar 6 01:41:34.636386 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 01:41:34.636433 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 01:41:34.642310 systemd-resolved[235]: Defaulting to hostname 'linux'. Mar 6 01:41:34.644319 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 01:41:34.654524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:41:34.718063 kernel: SCSI subsystem initialized Mar 6 01:41:34.733071 kernel: Loading iSCSI transport class v2.0-870. Mar 6 01:41:34.751138 kernel: iscsi: registered transport (tcp) Mar 6 01:41:34.782780 kernel: iscsi: registered transport (qla4xxx) Mar 6 01:41:34.782898 kernel: QLogic iSCSI HBA Driver Mar 6 01:41:34.868391 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 6 01:41:34.881356 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 6 01:41:34.943237 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 6 01:41:34.943411 kernel: device-mapper: uevent: version 1.0.3 Mar 6 01:41:34.949912 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 6 01:41:35.004078 kernel: raid6: avx2x4 gen() 29002 MB/s Mar 6 01:41:35.022075 kernel: raid6: avx2x2 gen() 24413 MB/s Mar 6 01:41:35.042212 kernel: raid6: avx2x1 gen() 15988 MB/s Mar 6 01:41:35.042294 kernel: raid6: using algorithm avx2x4 gen() 29002 MB/s Mar 6 01:41:35.063829 kernel: raid6: .... xor() 4705 MB/s, rmw enabled Mar 6 01:41:35.063913 kernel: raid6: using avx2x2 recovery algorithm Mar 6 01:41:35.089069 kernel: xor: automatically using best checksumming function avx Mar 6 01:41:35.397336 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 6 01:41:35.418542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 6 01:41:35.444618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:41:35.475437 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 6 01:41:35.484376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:41:35.496340 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 6 01:41:35.515353 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Mar 6 01:41:35.565904 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 01:41:35.588653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 01:41:35.718073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:41:35.747783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 6 01:41:35.783281 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 6 01:41:35.773662 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 6 01:41:35.789128 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 01:41:35.804680 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 6 01:41:35.805143 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:41:35.813602 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 01:41:35.840738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 6 01:41:35.840773 kernel: GPT:9289727 != 19775487 Mar 6 01:41:35.840825 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 6 01:41:35.840836 kernel: GPT:9289727 != 19775487 Mar 6 01:41:35.840845 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 6 01:41:35.840855 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 01:41:35.848331 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 6 01:41:35.864229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 6 01:41:35.864560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:41:35.871869 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 6 01:41:35.898563 kernel: libata version 3.00 loaded. Mar 6 01:41:35.898595 kernel: cryptd: max_cpu_qlen set to 1000 Mar 6 01:41:35.878077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 01:41:35.878227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:41:35.902188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:41:35.937074 kernel: AVX2 version of gcm_enc/dec engaged. Mar 6 01:41:35.937110 kernel: AES CTR mode by8 optimization enabled Mar 6 01:41:35.918666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:41:35.957119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 6 01:41:35.976071 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Mar 6 01:41:35.986337 kernel: ahci 0000:00:1f.2: version 3.0 Mar 6 01:41:35.986582 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (470) Mar 6 01:41:35.986597 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 6 01:41:35.989758 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 6 01:41:36.029268 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 6 01:41:36.029538 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 6 01:41:36.029804 kernel: scsi host0: ahci Mar 6 01:41:36.030094 kernel: scsi host1: ahci Mar 6 01:41:36.030287 kernel: scsi host2: ahci Mar 6 01:41:36.030465 kernel: scsi host3: ahci Mar 6 01:41:36.030618 kernel: scsi host4: ahci Mar 6 01:41:36.030830 kernel: scsi host5: ahci Mar 6 01:41:36.031053 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 6 01:41:36.031066 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 6 01:41:36.031075 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 6 01:41:36.007811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 6 01:41:36.060845 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 6 01:41:36.060881 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 6 01:41:36.060897 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 6 01:41:36.063084 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 6 01:41:36.082898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:41:36.093810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 6 01:41:36.099136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 6 01:41:36.125218 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 6 01:41:36.131018 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 6 01:41:36.150324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 01:41:36.150353 disk-uuid[559]: Primary Header is updated. Mar 6 01:41:36.150353 disk-uuid[559]: Secondary Entries is updated. Mar 6 01:41:36.150353 disk-uuid[559]: Secondary Header is updated. Mar 6 01:41:36.168265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:41:36.346022 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 6 01:41:36.350066 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 6 01:41:36.350107 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 6 01:41:36.362070 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 6 01:41:36.366035 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 6 01:41:36.366101 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 6 01:41:36.370146 kernel: ata3.00: applying bridge limits Mar 6 01:41:36.371081 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 6 01:41:36.374073 kernel: ata3.00: configured for UDMA/100 Mar 6 01:41:36.380109 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 6 01:41:36.455620 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 6 01:41:36.456373 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 6 01:41:36.471126 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 6 01:41:37.168112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 01:41:37.169202 disk-uuid[561]: The operation has completed successfully. Mar 6 01:41:37.213378 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 6 01:41:37.213563 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 6 01:41:37.256269 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 6 01:41:37.274414 sh[596]: Success Mar 6 01:41:37.301022 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 6 01:41:37.362161 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 6 01:41:37.386547 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 6 01:41:37.398237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 6 01:41:37.419819 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa Mar 6 01:41:37.419889 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 6 01:41:37.419910 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 6 01:41:37.425461 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 6 01:41:37.429362 kernel: BTRFS info (device dm-0): using free space tree Mar 6 01:41:37.446870 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 6 01:41:37.449499 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 6 01:41:37.472389 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 6 01:41:37.477451 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 6 01:41:37.517620 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5 Mar 6 01:41:37.517675 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 01:41:37.517686 kernel: BTRFS info (device vda6): using free space tree Mar 6 01:41:37.529110 kernel: BTRFS info (device vda6): auto enabling async discard Mar 6 01:41:37.544086 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 6 01:41:37.558048 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5 Mar 6 01:41:37.565459 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 6 01:41:37.577230 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 6 01:41:37.660583 ignition[702]: Ignition 2.19.0 Mar 6 01:41:37.660629 ignition[702]: Stage: fetch-offline Mar 6 01:41:37.660680 ignition[702]: no configs at "/usr/lib/ignition/base.d" Mar 6 01:41:37.660748 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:41:37.660901 ignition[702]: parsed url from cmdline: "" Mar 6 01:41:37.660909 ignition[702]: no config URL provided Mar 6 01:41:37.660918 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Mar 6 01:41:37.660935 ignition[702]: no config at "/usr/lib/ignition/user.ign" Mar 6 01:41:37.661071 ignition[702]: op(1): [started] loading QEMU firmware config module Mar 6 01:41:37.661081 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 6 01:41:37.699013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 01:41:37.716066 ignition[702]: op(1): [finished] loading QEMU firmware config module Mar 6 01:41:37.721208 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:41:37.766593 systemd-networkd[784]: lo: Link UP Mar 6 01:41:37.766639 systemd-networkd[784]: lo: Gained carrier Mar 6 01:41:37.769613 systemd-networkd[784]: Enumeration completed Mar 6 01:41:37.771162 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 01:41:37.773561 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:41:37.773568 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 6 01:41:37.775200 systemd-networkd[784]: eth0: Link UP Mar 6 01:41:37.775205 systemd-networkd[784]: eth0: Gained carrier Mar 6 01:41:37.775212 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:41:37.777157 systemd[1]: Reached target network.target - Network. Mar 6 01:41:37.860103 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 01:41:38.048177 ignition[702]: parsing config with SHA512: 9c8683ba4864683f41e7060a251d6b09037b9f20db947cc3447f78bca70917944e929547f5e019bfab9136e3144ef1cb0c5b31408b596b5c52e845ad7a0ca3ea Mar 6 01:41:38.056208 unknown[702]: fetched base config from "system" Mar 6 01:41:38.057067 unknown[702]: fetched user config from "qemu" Mar 6 01:41:38.057599 ignition[702]: fetch-offline: fetch-offline passed Mar 6 01:41:38.057686 ignition[702]: Ignition finished successfully Mar 6 01:41:38.071398 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 01:41:38.074077 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 6 01:41:38.094346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 6 01:41:38.115554 ignition[788]: Ignition 2.19.0 Mar 6 01:41:38.115588 ignition[788]: Stage: kargs Mar 6 01:41:38.115838 ignition[788]: no configs at "/usr/lib/ignition/base.d" Mar 6 01:41:38.115851 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:41:38.117043 ignition[788]: kargs: kargs passed Mar 6 01:41:38.117132 ignition[788]: Ignition finished successfully Mar 6 01:41:38.139863 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 6 01:41:38.158222 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 6 01:41:38.189437 ignition[796]: Ignition 2.19.0 Mar 6 01:41:38.189475 ignition[796]: Stage: disks Mar 6 01:41:38.189663 ignition[796]: no configs at "/usr/lib/ignition/base.d" Mar 6 01:41:38.189697 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:41:38.202445 ignition[796]: disks: disks passed Mar 6 01:41:38.202534 ignition[796]: Ignition finished successfully Mar 6 01:41:38.210842 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 6 01:41:38.218447 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 6 01:41:38.220038 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 6 01:41:38.230863 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 01:41:38.240114 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 01:41:38.243653 systemd[1]: Reached target basic.target - Basic System. Mar 6 01:41:38.267503 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 6 01:41:38.294345 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 6 01:41:38.303684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 6 01:41:38.332064 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 6 01:41:38.476044 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none. Mar 6 01:41:38.477869 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 6 01:41:38.483888 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Mar 6 01:41:38.512774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 01:41:38.519387 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 6 01:41:38.521041 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 6 01:41:38.541062 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Mar 6 01:41:38.521097 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 6 01:41:38.567188 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5 Mar 6 01:41:38.567223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 01:41:38.567234 kernel: BTRFS info (device vda6): using free space tree Mar 6 01:41:38.567244 kernel: BTRFS info (device vda6): auto enabling async discard Mar 6 01:41:38.521125 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 01:41:38.571666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 6 01:41:38.612940 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 6 01:41:38.634284 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 6 01:41:38.733853 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Mar 6 01:41:38.743382 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Mar 6 01:41:38.753672 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Mar 6 01:41:38.762831 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Mar 6 01:41:39.028175 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 6 01:41:39.048578 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 6 01:41:39.063865 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 6 01:41:39.075797 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 6 01:41:39.085175 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5 Mar 6 01:41:39.125458 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 6 01:41:39.137560 ignition[927]: INFO : Ignition 2.19.0 Mar 6 01:41:39.137560 ignition[927]: INFO : Stage: mount Mar 6 01:41:39.143396 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:41:39.143396 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:41:39.143396 ignition[927]: INFO : mount: mount passed Mar 6 01:41:39.143396 ignition[927]: INFO : Ignition finished successfully Mar 6 01:41:39.160057 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 6 01:41:39.176131 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 6 01:41:39.417597 systemd-networkd[784]: eth0: Gained IPv6LL Mar 6 01:41:39.493548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 6 01:41:39.515170 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 6 01:41:39.523345 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5 Mar 6 01:41:39.523488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 01:41:39.523541 kernel: BTRFS info (device vda6): using free space tree Mar 6 01:41:39.535207 kernel: BTRFS info (device vda6): auto enabling async discard Mar 6 01:41:39.539101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 6 01:41:39.633453 ignition[957]: INFO : Ignition 2.19.0 Mar 6 01:41:39.633453 ignition[957]: INFO : Stage: files Mar 6 01:41:39.640996 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:41:39.640996 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:41:39.640996 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 6 01:41:39.658560 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 6 01:41:39.658560 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 6 01:41:39.671697 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 6 01:41:39.677672 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 6 01:41:39.677672 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 6 01:41:39.677672 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 01:41:39.677672 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 6 01:41:39.673403 unknown[957]: wrote ssh authorized keys file for user: core Mar 6 01:41:39.755162 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 6 01:41:39.944506 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 01:41:39.944506 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/etc/flatcar/update.conf" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:41:39.959466 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 6 01:41:40.259511 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 6 01:41:41.976837 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:41:41.976837 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 6 01:41:41.991127 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 01:41:41.998764 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 01:41:41.998764 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 6 01:41:42.010478 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 6 01:41:42.010478 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 01:41:42.022891 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 01:41:42.022891 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 6 01:41:42.022891 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 6 01:41:42.076301 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 01:41:42.090777 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 01:41:42.096380 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 6 01:41:42.096380 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 6 01:41:42.096380 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 6 01:41:42.096380 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 6 01:41:42.096380 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 6 01:41:42.096380 ignition[957]: INFO : files: files passed Mar 6 
01:41:42.096380 ignition[957]: INFO : Ignition finished successfully Mar 6 01:41:42.135389 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 6 01:41:42.156375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 01:41:42.159248 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 01:41:42.176304 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Mar 6 01:41:42.182025 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:41:42.182025 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:41:42.181384 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:41:42.222570 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:41:42.185422 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 6 01:41:42.227559 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 6 01:41:42.252367 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 01:41:42.255684 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 01:41:42.333554 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 6 01:41:42.333808 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 6 01:41:42.341265 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 6 01:41:42.348514 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 6 01:41:42.352618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 6 01:41:42.377257 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 6 01:41:42.394344 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:41:42.412864 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 6 01:41:42.428541 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:41:42.437710 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:41:42.447528 systemd[1]: Stopped target timers.target - Timer Units. Mar 6 01:41:42.455293 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 6 01:41:42.459417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:41:42.469179 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 6 01:41:42.477514 systemd[1]: Stopped target basic.target - Basic System. Mar 6 01:41:42.485217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 6 01:41:42.494852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 01:41:42.505850 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 6 01:41:42.516589 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 6 01:41:42.526607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 01:41:42.539072 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 6 01:41:42.548799 systemd[1]: Stopped target local-fs.target - Local File Systems. 
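The files stage above ends by writing /sysroot/etc/.ignition-result.json. After switch-root that marker is visible at /etc/.ignition-result.json; a minimal sketch for inspecting it, assuming only that it holds a small JSON document (its fields are not shown in this log):

```python
import json

# Path taken from the log above; the schema is not shown there, so the sketch
# simply pretty-prints whatever the marker contains.
with open("/etc/.ignition-result.json") as f:
    print(json.dumps(json.load(f), indent=2))
```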
Mar 6 01:41:42.553517 systemd[1]: Stopped target swap.target - Swaps. Mar 6 01:41:42.558135 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 6 01:41:42.558380 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 6 01:41:42.564493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:41:42.589929 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:41:42.594551 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 6 01:41:42.606291 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:41:42.617769 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 6 01:41:42.618094 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 6 01:41:42.623657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 6 01:41:42.624010 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 01:41:42.642051 systemd[1]: Stopped target paths.target - Path Units. Mar 6 01:41:42.646904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 6 01:41:42.662243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:41:42.675019 systemd[1]: Stopped target slices.target - Slice Units. Mar 6 01:41:42.687554 systemd[1]: Stopped target sockets.target - Socket Units. Mar 6 01:41:42.713151 systemd[1]: iscsid.socket: Deactivated successfully. Mar 6 01:41:42.713262 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 01:41:42.737296 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 6 01:41:42.737460 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 6 01:41:42.750118 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 6 01:41:42.750351 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:41:42.755312 systemd[1]: ignition-files.service: Deactivated successfully. Mar 6 01:41:42.755457 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 6 01:41:42.858385 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 6 01:41:42.878838 ignition[1011]: INFO : Ignition 2.19.0 Mar 6 01:41:42.878838 ignition[1011]: INFO : Stage: umount Mar 6 01:41:42.878838 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:41:42.878838 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:41:42.865825 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 6 01:41:42.908337 ignition[1011]: INFO : umount: umount passed Mar 6 01:41:42.908337 ignition[1011]: INFO : Ignition finished successfully Mar 6 01:41:42.866118 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:41:42.882476 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 6 01:41:42.888321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 6 01:41:42.888481 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:41:42.892003 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 6 01:41:42.892202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 01:41:42.911802 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 6 01:41:42.912133 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 6 01:41:42.919881 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 6 01:41:42.920103 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 6 01:41:42.927417 systemd[1]: Stopped target network.target - Network. Mar 6 01:41:42.938029 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 6 01:41:42.938109 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 6 01:41:42.942428 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 6 01:41:42.942477 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 6 01:41:42.945112 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 6 01:41:42.945174 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 6 01:41:42.962281 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 6 01:41:42.962383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 6 01:41:42.972327 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 6 01:41:42.983369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 6 01:41:42.998429 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 6 01:41:42.998623 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 6 01:41:43.003094 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 6 01:41:43.003177 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:41:43.038199 systemd-networkd[784]: eth0: DHCPv6 lease lost Mar 6 01:41:43.044313 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 6 01:41:43.044601 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 6 01:41:43.051601 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 6 01:41:43.051658 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:41:43.088264 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 6 01:41:43.092815 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 6 01:41:43.092889 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 01:41:43.103913 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 01:41:43.104056 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:41:43.107915 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 6 01:41:43.108053 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 6 01:41:43.122517 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:41:43.146014 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 6 01:41:43.146275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:41:43.151703 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 6 01:41:43.151914 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 6 01:41:43.161652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 6 01:41:43.161786 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 6 01:41:43.167566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 6 01:41:43.167610 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:41:43.173035 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Mar 6 01:41:43.173091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 6 01:41:43.185271 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 6 01:41:43.185338 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 6 01:41:43.196249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 6 01:41:43.196336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:41:43.225279 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 6 01:41:43.233544 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 6 01:41:43.233698 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:41:43.246281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 01:41:43.246378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:41:43.252545 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 6 01:41:43.252896 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 6 01:41:43.409147 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 6 01:41:43.580919 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 6 01:41:43.581411 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 6 01:41:43.602157 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 6 01:41:43.606061 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 6 01:41:43.606153 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 6 01:41:43.656677 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 6 01:41:43.677825 systemd[1]: Switching root. Mar 6 01:41:43.721770 systemd-journald[193]: Journal stopped Mar 6 01:41:46.340187 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Mar 6 01:41:46.340282 kernel: SELinux: policy capability network_peer_controls=1 Mar 6 01:41:46.340301 kernel: SELinux: policy capability open_perms=1 Mar 6 01:41:46.340317 kernel: SELinux: policy capability extended_socket_class=1 Mar 6 01:41:46.340355 kernel: SELinux: policy capability always_check_network=0 Mar 6 01:41:46.340371 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 6 01:41:46.340435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 6 01:41:46.340456 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 6 01:41:46.340472 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 6 01:41:46.340488 kernel: audit: type=1403 audit(1772761304.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 6 01:41:46.340505 systemd[1]: Successfully loaded SELinux policy in 73.622ms. Mar 6 01:41:46.340529 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.262ms. Mar 6 01:41:46.340547 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:41:46.340564 systemd[1]: Detected virtualization kvm. Mar 6 01:41:46.340580 systemd[1]: Detected architecture x86-64. Mar 6 01:41:46.340597 systemd[1]: Detected first boot. Mar 6 01:41:46.340617 systemd[1]: Initializing machine ID from VM UUID. 
Mar 6 01:41:46.340635 zram_generator::config[1055]: No configuration found. Mar 6 01:41:46.340663 systemd[1]: Populated /etc with preset unit settings. Mar 6 01:41:46.340680 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 6 01:41:46.340697 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 6 01:41:46.340713 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 6 01:41:46.340780 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 6 01:41:46.340800 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 6 01:41:46.340826 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 6 01:41:46.340846 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 6 01:41:46.340865 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 6 01:41:46.340941 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 6 01:41:46.341109 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 6 01:41:46.341127 systemd[1]: Created slice user.slice - User and Session Slice. Mar 6 01:41:46.341148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:41:46.341166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:41:46.341185 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 6 01:41:46.341212 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 6 01:41:46.341231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 6 01:41:46.341252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 6 01:41:46.341269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 6 01:41:46.341288 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:41:46.341307 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 6 01:41:46.341326 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 6 01:41:46.341345 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 6 01:41:46.341369 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 6 01:41:46.341390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:41:46.341408 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 01:41:46.341427 systemd[1]: Reached target slices.target - Slice Units. Mar 6 01:41:46.341445 systemd[1]: Reached target swap.target - Swaps. Mar 6 01:41:46.341462 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 6 01:41:46.341480 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 6 01:41:46.341550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:41:46.341568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 6 01:41:46.341593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:41:46.341612 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Mar 6 01:41:46.341632 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 6 01:41:46.341650 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 6 01:41:46.341667 systemd[1]: Mounting media.mount - External Media Directory... Mar 6 01:41:46.341687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:46.341706 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 6 01:41:46.341724 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 6 01:41:46.341801 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 6 01:41:46.341820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 6 01:41:46.341837 systemd[1]: Reached target machines.target - Containers. Mar 6 01:41:46.341854 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 6 01:41:46.341870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:41:46.341887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 01:41:46.341904 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 6 01:41:46.341921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:41:46.341938 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:41:46.342024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:41:46.342042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 6 01:41:46.342064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:41:46.342081 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 6 01:41:46.342097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 6 01:41:46.342114 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 6 01:41:46.342131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 6 01:41:46.342148 systemd[1]: Stopped systemd-fsck-usr.service. Mar 6 01:41:46.342168 kernel: ACPI: bus type drm_connector registered Mar 6 01:41:46.342184 kernel: fuse: init (API version 7.39) Mar 6 01:41:46.342204 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 01:41:46.342221 kernel: loop: module loaded Mar 6 01:41:46.342237 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 01:41:46.342254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 01:41:46.342297 systemd-journald[1139]: Collecting audit messages is disabled. Mar 6 01:41:46.342327 systemd-journald[1139]: Journal started Mar 6 01:41:46.342359 systemd-journald[1139]: Runtime Journal (/run/log/journal/dfcff9163a7b46cc9afa31b643c0ac06) is 6.0M, max 48.3M, 42.2M free. Mar 6 01:41:45.240131 systemd[1]: Queued start job for default target multi-user.target. Mar 6 01:41:45.279495 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 6 01:41:45.280436 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 6 01:41:45.281104 systemd[1]: systemd-journald.service: Consumed 2.053s CPU time. Mar 6 01:41:46.361563 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 6 01:41:46.378253 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 01:41:46.394864 systemd[1]: verity-setup.service: Deactivated successfully. Mar 6 01:41:46.394917 systemd[1]: Stopped verity-setup.service. Mar 6 01:41:46.412051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:46.425851 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 01:41:46.431680 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 6 01:41:46.440073 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 6 01:41:46.449090 systemd[1]: Mounted media.mount - External Media Directory. Mar 6 01:41:46.460262 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 6 01:41:46.470089 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 6 01:41:46.478329 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 6 01:41:46.484375 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 6 01:41:46.493459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:41:46.500933 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 6 01:41:46.501900 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 6 01:41:46.511521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:41:46.511887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:41:46.518872 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:41:46.519215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:41:46.524654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:41:46.525438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:41:46.533471 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 6 01:41:46.533803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 6 01:41:46.539493 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:41:46.539798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:41:46.544694 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 01:41:46.549249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 01:41:46.556926 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 6 01:41:46.564223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:41:46.596437 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 01:41:46.614224 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 6 01:41:46.630895 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 6 01:41:46.634369 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 6 01:41:46.634407 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 6 01:41:46.641404 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 6 01:41:46.647537 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 01:41:46.655783 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 6 01:41:46.660788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:41:46.663463 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 6 01:41:46.670136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 6 01:41:46.675195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:41:46.677348 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 6 01:41:46.682473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:41:46.690126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:41:46.696097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 6 01:41:46.700391 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 6 01:41:46.708208 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 6 01:41:46.716195 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 6 01:41:46.721665 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 6 01:41:46.728586 systemd-journald[1139]: Time spent on flushing to /var/log/journal/dfcff9163a7b46cc9afa31b643c0ac06 is 25.010ms for 984 entries. Mar 6 01:41:46.728586 systemd-journald[1139]: System Journal (/var/log/journal/dfcff9163a7b46cc9afa31b643c0ac06) is 8.0M, max 195.6M, 187.6M free. Mar 6 01:41:46.776294 systemd-journald[1139]: Received client request to flush runtime journal. Mar 6 01:41:46.735885 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 01:41:46.743132 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 6 01:41:46.764184 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 6 01:41:46.780320 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 6 01:41:46.789074 kernel: loop0: detected capacity change from 0 to 142488 Mar 6 01:41:46.790379 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 6 01:41:46.800460 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:41:46.811289 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 6 01:41:46.825182 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 6 01:41:46.840048 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 6 01:41:46.845346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 01:41:46.852434 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 6 01:41:46.853571 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
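The journal-flush statistics above (25.010 ms for 984 entries) imply an average cost of roughly 25 µs per entry; a one-line check of that arithmetic:

```python
# Figures taken from the systemd-journald flush message above.
total_ms, entries = 25.010, 984
print(f"{total_ms / entries * 1000:.1f} us per entry")  # ~25.4 us
```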
Mar 6 01:41:46.881032 kernel: loop1: detected capacity change from 0 to 140768 Mar 6 01:41:46.881263 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 6 01:41:46.881599 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 6 01:41:46.888788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:41:46.933215 kernel: loop2: detected capacity change from 0 to 228704 Mar 6 01:41:46.974176 kernel: loop3: detected capacity change from 0 to 142488 Mar 6 01:41:47.002330 kernel: loop4: detected capacity change from 0 to 140768 Mar 6 01:41:47.028598 kernel: loop5: detected capacity change from 0 to 228704 Mar 6 01:41:47.043054 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 6 01:41:47.043717 (sd-merge)[1193]: Merged extensions into '/usr'. Mar 6 01:41:47.048571 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Mar 6 01:41:47.048729 systemd[1]: Reloading... Mar 6 01:41:47.116010 zram_generator::config[1222]: No configuration found. Mar 6 01:41:47.313556 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 6 01:41:47.375124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:41:47.453478 systemd[1]: Reloading finished in 403 ms. Mar 6 01:41:47.493893 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 6 01:41:47.500665 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 6 01:41:47.509452 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 6 01:41:47.557485 systemd[1]: Starting ensure-sysext.service... Mar 6 01:41:47.565064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 01:41:47.581556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:41:47.590481 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Mar 6 01:41:47.590612 systemd[1]: Reloading... Mar 6 01:41:47.620522 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 6 01:41:47.621695 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 6 01:41:47.623444 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 6 01:41:47.623703 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 6 01:41:47.623864 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 6 01:41:47.628904 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:41:47.629104 systemd-tmpfiles[1258]: Skipping /boot Mar 6 01:41:47.649484 systemd-udevd[1259]: Using default interface naming scheme 'v255'. Mar 6 01:41:47.649908 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:41:47.650081 systemd-tmpfiles[1258]: Skipping /boot Mar 6 01:41:47.679092 zram_generator::config[1285]: No configuration found. 
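The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar, and kubernetes extension images into /usr. A small sketch that lists the images Ignition linked under /etc/extensions (the kubernetes.raw symlink was written earlier in this log); on another host the directory contents will differ:

```python
import os

# systemd-sysext discovers extension images under /etc/extensions (among other
# directories); /etc/extensions/kubernetes.raw was created by the files stage above.
ext_dir = "/etc/extensions"
for name in sorted(os.listdir(ext_dir)):
    print(name, "->", os.path.realpath(os.path.join(ext_dir, name)))
```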
Mar 6 01:41:47.774359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1306) Mar 6 01:41:47.852358 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 6 01:41:47.860022 kernel: ACPI: button: Power Button [PWRF] Mar 6 01:41:47.863335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:41:47.894527 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 6 01:41:47.894867 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 01:41:47.895119 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 01:41:47.895141 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 6 01:41:47.896892 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 01:41:47.929072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:41:47.934311 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 01:41:47.935131 systemd[1]: Reloading finished in 343 ms. Mar 6 01:41:47.942032 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 01:41:47.956249 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:41:47.961881 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:41:47.999614 systemd[1]: Finished ensure-sysext.service. Mar 6 01:41:48.068815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:48.084216 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:41:48.093187 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 01:41:48.098043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:41:48.102338 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:41:48.108220 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:41:48.117209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:41:48.124541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:41:48.130418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:41:48.137035 kernel: kvm_amd: TSC scaling supported Mar 6 01:41:48.137103 kernel: kvm_amd: Nested Virtualization enabled Mar 6 01:41:48.137122 kernel: kvm_amd: Nested Paging enabled Mar 6 01:41:48.138092 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 01:41:48.138131 kernel: kvm_amd: PMU virtualization is disabled Mar 6 01:41:48.145498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 01:41:48.149297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 01:41:48.162617 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:41:48.174389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:41:48.184585 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 6 01:41:48.197218 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 01:41:48.205285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:41:48.206840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:48.207922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:41:48.208328 augenrules[1382]: No rules Mar 6 01:41:48.208254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:41:48.217097 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:41:48.219286 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:41:48.219486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:41:48.220229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:41:48.220422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:41:48.232424 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:41:48.233555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:41:48.238799 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 01:41:48.244784 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 01:41:48.274501 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 01:41:48.281532 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 01:41:48.295687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:41:48.296492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:41:48.311918 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 01:41:48.323437 kernel: EDAC MC: Ver: 3.0.0 Mar 6 01:41:48.324250 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 01:41:48.325855 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 01:41:48.344598 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 01:41:48.363521 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 6 01:41:48.374408 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 6 01:41:48.387523 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 01:41:48.392438 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:41:48.417815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:41:48.430132 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 6 01:41:48.437456 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:41:48.450326 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 6 01:41:48.461294 lvm[1417]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 6 01:41:48.474154 systemd-networkd[1375]: lo: Link UP Mar 6 01:41:48.474562 systemd-networkd[1375]: lo: Gained carrier Mar 6 01:41:48.476477 systemd-networkd[1375]: Enumeration completed Mar 6 01:41:48.477129 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 01:41:48.477781 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:41:48.477837 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 01:41:48.479266 systemd-networkd[1375]: eth0: Link UP Mar 6 01:41:48.479323 systemd-networkd[1375]: eth0: Gained carrier Mar 6 01:41:48.479393 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:41:48.489406 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 01:41:48.490276 systemd-resolved[1378]: Positive Trust Anchors: Mar 6 01:41:48.490327 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 01:41:48.490354 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 01:41:48.496291 systemd-resolved[1378]: Defaulting to hostname 'linux'. Mar 6 01:41:48.498637 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 01:41:48.502466 systemd[1]: Reached target network.target - Network. Mar 6 01:41:48.505626 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:41:48.512094 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 01:41:48.513452 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 6 01:41:48.520297 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 6 01:41:48.524474 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 01:41:48.528216 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 01:41:48.532359 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 01:41:48.537248 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 01:41:48.541623 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 01:41:48.541681 systemd[1]: Reached target paths.target - Path Units. Mar 6 01:41:48.544815 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 01:41:49.459227 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 01:41:49.459281 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 01:41:49.462534 systemd-resolved[1378]: Clock change detected. Flushing caches. 
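The DHCPv4 lease above (10.0.0.94/16, gateway 10.0.0.1, also seen in the initrd earlier) can be unpacked with the standard-library ipaddress module, for example to confirm the gateway is on-link:

```python
import ipaddress

# Address, prefix, and gateway taken from the systemd-networkd lease message above.
iface = ipaddress.IPv4Interface("10.0.0.94/16")
gateway = ipaddress.IPv4Address("10.0.0.1")
print(iface.network)                    # 10.0.0.0/16
print(iface.network.netmask)            # 255.255.0.0
print(iface.network.broadcast_address)  # 10.0.255.255
print(gateway in iface.network)         # True: the gateway is on-link
```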
Mar 6 01:41:49.463145 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 01:41:49.463182 systemd-timesyncd[1379]: Initial clock synchronization to Fri 2026-03-06 01:41:49.459180 UTC. Mar 6 01:41:49.467493 systemd[1]: Reached target timers.target - Timer Units. Mar 6 01:41:49.471281 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 01:41:49.477511 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 01:41:49.494068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 01:41:49.498313 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 01:41:49.502167 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 01:41:49.505464 systemd[1]: Reached target basic.target - Basic System. Mar 6 01:41:49.508593 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:41:49.508647 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:41:49.510093 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 01:41:49.514947 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 01:41:49.520833 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 01:41:49.527501 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 01:41:49.531866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 01:41:49.534300 jq[1426]: false Mar 6 01:41:49.536197 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 01:41:49.542366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 01:41:49.550966 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 01:41:49.557801 extend-filesystems[1427]: Found loop3 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found loop4 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found loop5 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found sr0 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda1 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda2 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda3 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found usr Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda4 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda6 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda7 Mar 6 01:41:49.557801 extend-filesystems[1427]: Found vda9 Mar 6 01:41:49.557801 extend-filesystems[1427]: Checking size of /dev/vda9 Mar 6 01:41:49.714969 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1315) Mar 6 01:41:49.715000 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 01:41:49.561464 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 01:41:49.715113 extend-filesystems[1427]: Resized partition /dev/vda9 Mar 6 01:41:49.568513 dbus-daemon[1425]: [system] SELinux support is enabled Mar 6 01:41:49.570599 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 6 01:41:49.719223 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Mar 6 01:41:49.574036 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 01:41:49.574490 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 01:41:49.722957 update_engine[1442]: I20260306 01:41:49.670090 1442 main.cc:92] Flatcar Update Engine starting Mar 6 01:41:49.722957 update_engine[1442]: I20260306 01:41:49.683745 1442 update_check_scheduler.cc:74] Next update check in 9m21s Mar 6 01:41:49.576178 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 01:41:49.583072 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 01:41:49.586225 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 01:41:49.723534 jq[1443]: true Mar 6 01:41:49.593226 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 01:41:49.593447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 01:41:49.724020 jq[1447]: true Mar 6 01:41:49.594836 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 01:41:49.724229 tar[1445]: linux-amd64/LICENSE Mar 6 01:41:49.724229 tar[1445]: linux-amd64/helm Mar 6 01:41:49.595037 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 01:41:49.610286 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 01:41:49.610311 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 01:41:49.611715 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 01:41:49.611734 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 01:41:49.626684 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 01:41:49.647282 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 01:41:49.647538 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 01:41:49.677501 systemd[1]: Started update-engine.service - Update Engine. Mar 6 01:41:49.690994 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 6 01:41:49.725887 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Mar 6 01:41:49.749499 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 01:41:49.725910 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 01:41:49.730941 systemd-logind[1439]: New seat seat0. Mar 6 01:41:49.740205 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 01:41:49.753093 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 01:41:49.753093 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 01:41:49.753093 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
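The resize2fs output above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB; in more familiar units that is roughly 2.1 GiB to 7.1 GiB:

```python
# Block counts and block size taken from the resize2fs report above.
block_size = 4096
old_blocks, new_blocks = 553_472, 1_864_699
old_gib = old_blocks * block_size / 2**30
new_gib = new_blocks * block_size / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # ~2.11 GiB -> ~7.11 GiB
```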
Mar 6 01:41:49.774273 extend-filesystems[1427]: Resized filesystem in /dev/vda9 Mar 6 01:41:49.777455 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Mar 6 01:41:49.757260 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 01:41:49.757633 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 01:41:49.777166 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 01:41:49.785051 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 6 01:41:49.796382 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 01:41:49.923887 containerd[1451]: time="2026-03-06T01:41:49.923641729Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 6 01:41:49.966915 containerd[1451]: time="2026-03-06T01:41:49.966814110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979026 containerd[1451]: time="2026-03-06T01:41:49.978897142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979026 containerd[1451]: time="2026-03-06T01:41:49.978952936Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 6 01:41:49.979026 containerd[1451]: time="2026-03-06T01:41:49.978975569Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 6 01:41:49.979444 containerd[1451]: time="2026-03-06T01:41:49.979236927Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 6 01:41:49.979444 containerd[1451]: time="2026-03-06T01:41:49.979281209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979444 containerd[1451]: time="2026-03-06T01:41:49.979384853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979444 containerd[1451]: time="2026-03-06T01:41:49.979403678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979994 containerd[1451]: time="2026-03-06T01:41:49.979720249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979994 containerd[1451]: time="2026-03-06T01:41:49.979823351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979994 containerd[1451]: time="2026-03-06T01:41:49.979848919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979994 containerd[1451]: time="2026-03-06T01:41:49.979867193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 6 01:41:49.979994 containerd[1451]: time="2026-03-06T01:41:49.979996013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:49.980514 containerd[1451]: time="2026-03-06T01:41:49.980298268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:49.980514 containerd[1451]: time="2026-03-06T01:41:49.980473325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:49.980514 containerd[1451]: time="2026-03-06T01:41:49.980492992Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 6 01:41:49.980912 containerd[1451]: time="2026-03-06T01:41:49.980686853Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 6 01:41:49.980912 containerd[1451]: time="2026-03-06T01:41:49.980875867Z" level=info msg="metadata content store policy set" policy=shared Mar 6 01:41:50.001132 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 01:41:50.020353 containerd[1451]: time="2026-03-06T01:41:50.018929470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 6 01:41:50.020353 containerd[1451]: time="2026-03-06T01:41:50.019022624Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 6 01:41:50.020353 containerd[1451]: time="2026-03-06T01:41:50.019045797Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 6 01:41:50.020353 containerd[1451]: time="2026-03-06T01:41:50.019066105Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 6 01:41:50.020353 containerd[1451]: time="2026-03-06T01:41:50.019084980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 6 01:41:50.020353 containerd[1451]: time="2026-03-06T01:41:50.019310261Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 6 01:41:50.024100 containerd[1451]: time="2026-03-06T01:41:50.023683476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 6 01:41:50.024100 containerd[1451]: time="2026-03-06T01:41:50.024008843Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 6 01:41:50.024100 containerd[1451]: time="2026-03-06T01:41:50.024033970Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 6 01:41:50.024100 containerd[1451]: time="2026-03-06T01:41:50.024052114Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 6 01:41:50.024100 containerd[1451]: time="2026-03-06T01:41:50.024074816Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024100 containerd[1451]: time="2026-03-06T01:41:50.024091617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024110032Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024130741Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024162891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024179962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024196143Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024212053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024236879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024259381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024292 containerd[1451]: time="2026-03-06T01:41:50.024278987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024300237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024320484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024344069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024364687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024392258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024410623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024428737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024443485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024459584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024478219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024503947Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 6 01:41:50.024522 containerd[1451]: time="2026-03-06T01:41:50.024529655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024597652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024616728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024707708Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024734668Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024812022Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024837430Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024854682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024879669Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024892543Z" level=info msg="NRI interface is disabled by configuration." Mar 6 01:41:50.028728 containerd[1451]: time="2026-03-06T01:41:50.024905086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.025244781Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.025321213Z" level=info msg="Connect containerd service" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.025371847Z" level=info msg="using legacy CRI server" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.025381676Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.025484388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.026482221Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 01:41:50.029051 
containerd[1451]: time="2026-03-06T01:41:50.026998394Z" level=info msg="Start subscribing containerd event" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.027071431Z" level=info msg="Start recovering state" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.027155468Z" level=info msg="Start event monitor" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.027172720Z" level=info msg="Start snapshots syncer" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.027187097Z" level=info msg="Start cni network conf syncer for default" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.027197787Z" level=info msg="Start streaming server" Mar 6 01:41:50.029051 containerd[1451]: time="2026-03-06T01:41:50.029066235Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 01:41:50.029540 containerd[1451]: time="2026-03-06T01:41:50.029169608Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 01:41:50.029540 containerd[1451]: time="2026-03-06T01:41:50.029289101Z" level=info msg="containerd successfully booted in 0.108064s" Mar 6 01:41:50.029857 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 01:41:50.182641 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 01:41:50.232382 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 01:41:50.255458 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 01:41:50.262163 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:45058.service - OpenSSH per-connection server daemon (10.0.0.1:45058). Mar 6 01:41:50.275168 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 01:41:50.275421 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 01:41:50.287006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 01:41:50.312454 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 01:41:50.347847 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 01:41:50.353262 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 01:41:50.358922 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 01:41:50.419059 sshd[1507]: Accepted publickey for core from 10.0.0.1 port 45058 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:41:50.425845 sshd[1507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:41:50.439425 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 01:41:50.452152 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 01:41:50.460742 systemd-logind[1439]: New session 1 of user core. Mar 6 01:41:50.470888 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 01:41:50.488368 tar[1445]: linux-amd64/README.md Mar 6 01:41:50.486237 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 01:41:50.497968 (systemd)[1518]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 01:41:50.504895 systemd-networkd[1375]: eth0: Gained IPv6LL Mar 6 01:41:50.515829 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:41:50.531479 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 01:41:50.537992 systemd[1]: Reached target network-online.target - Network is Online. 
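containerd comes up with the CRI plugin using the overlayfs snapshotter and systemd cgroups (note Options:map[SystemdCgroup:true] in the config dump above), and it warns that no CNI configuration exists yet, which is expected before a network plugin is installed. A short sketch of how the effective configuration and socket can be inspected, assuming the default /run/containerd/containerd.sock path shown in the log:

    # print the merged containerd configuration, including the CRI plugin section
    containerd config dump | less
    # confirm the daemon answers on its socket
    sudo ctr --address /run/containerd/containerd.sock version
    # CNI configs are expected to appear here once a network plugin is deployed
    ls /etc/cni/net.d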
Mar 6 01:41:50.555339 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:41:50.562161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:41:50.571803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:41:50.614887 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:41:50.615204 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:41:50.619899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 01:41:50.628923 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:41:50.737108 systemd[1518]: Queued start job for default target default.target. Mar 6 01:41:50.751843 systemd[1518]: Created slice app.slice - User Application Slice. Mar 6 01:41:50.751903 systemd[1518]: Reached target paths.target - Paths. Mar 6 01:41:50.751923 systemd[1518]: Reached target timers.target - Timers. Mar 6 01:41:50.755962 systemd[1518]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:41:50.775541 systemd[1518]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:41:50.775907 systemd[1518]: Reached target sockets.target - Sockets. Mar 6 01:41:50.775932 systemd[1518]: Reached target basic.target - Basic System. Mar 6 01:41:50.775985 systemd[1518]: Reached target default.target - Main User Target. Mar 6 01:41:50.776037 systemd[1518]: Startup finished in 263ms. Mar 6 01:41:50.776661 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:41:50.797264 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 01:41:50.886296 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:47950.service - OpenSSH per-connection server daemon (10.0.0.1:47950). Mar 6 01:41:51.012228 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 47950 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:41:51.024991 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:41:51.044116 systemd-logind[1439]: New session 2 of user core. Mar 6 01:41:51.065145 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:41:51.210230 sshd[1549]: pam_unix(sshd:session): session closed for user core Mar 6 01:41:51.245439 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:47950.service: Deactivated successfully. Mar 6 01:41:51.251961 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 01:41:51.255930 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Mar 6 01:41:51.278162 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:47960.service - OpenSSH per-connection server daemon (10.0.0.1:47960). Mar 6 01:41:51.294967 systemd-logind[1439]: Removed session 2. Mar 6 01:41:51.552387 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 47960 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:41:51.555521 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:41:51.569688 systemd-logind[1439]: New session 3 of user core. Mar 6 01:41:51.583056 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:41:51.696507 kernel: hrtimer: interrupt took 4757452 ns Mar 6 01:41:52.275497 sshd[1556]: pam_unix(sshd:session): session closed for user core Mar 6 01:41:52.288536 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:47960.service: Deactivated successfully. Mar 6 01:41:52.294527 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 6 01:41:52.296726 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:41:52.302525 systemd-logind[1439]: Removed session 3. Mar 6 01:41:54.701346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:41:54.708076 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:41:54.712528 systemd[1]: Startup finished in 3.087s (kernel) + 10.141s (initrd) + 9.854s (userspace) = 23.083s. Mar 6 01:41:54.719977 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:41:57.701374 kubelet[1568]: E0306 01:41:57.701010 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:41:57.708991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:41:57.709439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:41:57.710393 systemd[1]: kubelet.service: Consumed 5.725s CPU time. Mar 6 01:42:02.311320 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:59916.service - OpenSSH per-connection server daemon (10.0.0.1:59916). Mar 6 01:42:02.398282 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 59916 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:42:02.401392 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:42:02.411173 systemd-logind[1439]: New session 4 of user core. Mar 6 01:42:02.421169 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:42:02.490843 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 6 01:42:02.502399 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:59916.service: Deactivated successfully. Mar 6 01:42:02.504939 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:42:02.507534 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Mar 6 01:42:02.527518 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:59922.service - OpenSSH per-connection server daemon (10.0.0.1:59922). Mar 6 01:42:02.529229 systemd-logind[1439]: Removed session 4. Mar 6 01:42:02.572347 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 59922 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:42:02.574691 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:42:02.582949 systemd-logind[1439]: New session 5 of user core. Mar 6 01:42:02.592112 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:42:02.648957 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 6 01:42:02.669396 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:59922.service: Deactivated successfully. Mar 6 01:42:02.672303 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:42:02.675422 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Mar 6 01:42:02.694380 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:59936.service - OpenSSH per-connection server daemon (10.0.0.1:59936). Mar 6 01:42:02.696005 systemd-logind[1439]: Removed session 5. 
Mar 6 01:42:02.727355 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 59936 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:42:02.729182 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:42:02.735833 systemd-logind[1439]: New session 6 of user core. Mar 6 01:42:02.750128 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:42:02.821146 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 6 01:42:02.838864 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:59936.service: Deactivated successfully. Mar 6 01:42:02.841370 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 01:42:02.844046 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Mar 6 01:42:02.846183 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:59950.service - OpenSSH per-connection server daemon (10.0.0.1:59950). Mar 6 01:42:02.848015 systemd-logind[1439]: Removed session 6. Mar 6 01:42:02.901644 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 59950 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:42:02.903290 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:42:02.909696 systemd-logind[1439]: New session 7 of user core. Mar 6 01:42:02.924178 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 01:42:02.995288 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 01:42:02.995745 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:42:03.016140 sudo[1606]: pam_unix(sudo:session): session closed for user root Mar 6 01:42:03.018989 sshd[1603]: pam_unix(sshd:session): session closed for user core Mar 6 01:42:03.033427 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:59950.service: Deactivated successfully. Mar 6 01:42:03.036266 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:42:03.038577 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:42:03.049437 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:59966.service - OpenSSH per-connection server daemon (10.0.0.1:59966). Mar 6 01:42:03.051468 systemd-logind[1439]: Removed session 7. Mar 6 01:42:03.090713 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 59966 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:42:03.093866 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:42:03.100667 systemd-logind[1439]: New session 8 of user core. Mar 6 01:42:03.115069 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 01:42:03.179673 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 01:42:03.180281 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:42:03.186488 sudo[1615]: pam_unix(sudo:session): session closed for user root Mar 6 01:42:03.196043 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 6 01:42:03.196659 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:42:03.217296 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 6 01:42:03.223012 auditctl[1618]: No rules Mar 6 01:42:03.223582 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 6 01:42:03.224151 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 6 01:42:03.228658 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:42:03.282034 augenrules[1636]: No rules Mar 6 01:42:03.284428 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:42:03.286282 sudo[1614]: pam_unix(sudo:session): session closed for user root Mar 6 01:42:03.289268 sshd[1611]: pam_unix(sshd:session): session closed for user core Mar 6 01:42:03.303275 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:59966.service: Deactivated successfully. Mar 6 01:42:03.306165 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 01:42:03.308944 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Mar 6 01:42:03.321284 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:59974.service - OpenSSH per-connection server daemon (10.0.0.1:59974). Mar 6 01:42:03.323111 systemd-logind[1439]: Removed session 8. Mar 6 01:42:03.353878 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 59974 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:42:03.356488 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:42:03.364079 systemd-logind[1439]: New session 9 of user core. Mar 6 01:42:03.374124 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 01:42:03.437098 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:42:03.437726 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:42:03.795257 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 01:42:03.795418 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:42:06.312686 dockerd[1665]: time="2026-03-06T01:42:06.312127204Z" level=info msg="Starting up" Mar 6 01:42:06.808731 dockerd[1665]: time="2026-03-06T01:42:06.808358533Z" level=info msg="Loading containers: start." Mar 6 01:42:07.067857 kernel: Initializing XFRM netlink socket Mar 6 01:42:07.841155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:42:07.856212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:08.060405 systemd-networkd[1375]: docker0: Link UP Mar 6 01:42:08.148866 dockerd[1665]: time="2026-03-06T01:42:08.148537993Z" level=info msg="Loading containers: done." Mar 6 01:42:08.227850 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck91412995-merged.mount: Deactivated successfully. Mar 6 01:42:08.243306 dockerd[1665]: time="2026-03-06T01:42:08.243119388Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:42:08.244002 dockerd[1665]: time="2026-03-06T01:42:08.243887703Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:42:08.244524 dockerd[1665]: time="2026-03-06T01:42:08.244387495Z" level=info msg="Daemon has completed initialization" Mar 6 01:42:08.543145 dockerd[1665]: time="2026-03-06T01:42:08.541832059Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:42:08.545269 systemd[1]: Started docker.service - Docker Application Container Engine. 
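dockerd finishes initialization with the overlay2 storage driver and warns that native diff is unavailable because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, a performance note rather than an error. A sketch for verifying what the daemon actually selected, assuming the docker CLI is present on the host:

    # storage driver chosen by the daemon (overlay2 in this log)
    docker info --format '{{.Driver}}'
    # cgroup driver and cgroup version in use
    docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'
    # the API socket the daemon listens on
    ls -l /run/docker.sock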
Mar 6 01:42:08.921479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:08.960340 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:42:09.526838 kubelet[1818]: E0306 01:42:09.526023 1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:42:09.534575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:42:09.535003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:42:09.535490 systemd[1]: kubelet.service: Consumed 1.792s CPU time. Mar 6 01:42:10.049738 containerd[1451]: time="2026-03-06T01:42:10.049160731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 01:42:10.978702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733190602.mount: Deactivated successfully. Mar 6 01:42:14.938872 containerd[1451]: time="2026-03-06T01:42:14.937462786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:14.938872 containerd[1451]: time="2026-03-06T01:42:14.938813556Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 6 01:42:14.940239 containerd[1451]: time="2026-03-06T01:42:14.940129965Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:14.945924 containerd[1451]: time="2026-03-06T01:42:14.945866034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:14.947979 containerd[1451]: time="2026-03-06T01:42:14.947727609Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 4.898512446s" Mar 6 01:42:14.947979 containerd[1451]: time="2026-03-06T01:42:14.947855749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 01:42:14.951054 containerd[1451]: time="2026-03-06T01:42:14.950873009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 01:42:19.816491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 01:42:19.900855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 01:42:20.081249 containerd[1451]: time="2026-03-06T01:42:20.080377081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:20.093310 containerd[1451]: time="2026-03-06T01:42:20.092952047Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 6 01:42:20.116710 containerd[1451]: time="2026-03-06T01:42:20.114468773Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:20.133272 containerd[1451]: time="2026-03-06T01:42:20.133169135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:20.137258 containerd[1451]: time="2026-03-06T01:42:20.135608069Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 5.184665641s" Mar 6 01:42:20.137258 containerd[1451]: time="2026-03-06T01:42:20.135717143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 01:42:20.138540 containerd[1451]: time="2026-03-06T01:42:20.138507722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 01:42:20.303435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:20.376475 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:42:20.801415 kubelet[1899]: E0306 01:42:20.801119 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:42:20.809306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:42:20.809633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:42:20.813201 systemd[1]: kubelet.service: Consumed 1.257s CPU time. 
Mar 6 01:42:23.787583 containerd[1451]: time="2026-03-06T01:42:23.787317190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:23.793563 containerd[1451]: time="2026-03-06T01:42:23.789853667Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 6 01:42:23.793563 containerd[1451]: time="2026-03-06T01:42:23.793070063Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:23.807319 containerd[1451]: time="2026-03-06T01:42:23.807158466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:23.811248 containerd[1451]: time="2026-03-06T01:42:23.810098904Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 3.671240898s" Mar 6 01:42:23.811248 containerd[1451]: time="2026-03-06T01:42:23.810260043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 01:42:23.813840 containerd[1451]: time="2026-03-06T01:42:23.813632084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 01:42:26.179395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938730940.mount: Deactivated successfully. 
Mar 6 01:42:27.453855 containerd[1451]: time="2026-03-06T01:42:27.453601386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:27.456379 containerd[1451]: time="2026-03-06T01:42:27.456172256Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 6 01:42:27.457584 containerd[1451]: time="2026-03-06T01:42:27.457489945Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:27.460878 containerd[1451]: time="2026-03-06T01:42:27.460711655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:27.462796 containerd[1451]: time="2026-03-06T01:42:27.461299695Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 3.64759269s" Mar 6 01:42:27.462796 containerd[1451]: time="2026-03-06T01:42:27.461360543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 6 01:42:27.464869 containerd[1451]: time="2026-03-06T01:42:27.464672357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 6 01:42:28.018358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614224268.mount: Deactivated successfully. Mar 6 01:42:30.860030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 6 01:42:30.886622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:31.587321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:31.593603 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:42:31.957207 kubelet[1980]: E0306 01:42:31.956512 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:42:31.973296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:42:31.973581 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
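The control-plane images above are pulled by containerd into its k8s.io namespace, the namespace the CRI plugin uses; docker is not involved in these pulls. A sketch of how the pulled images and their digests can be listed, assuming crictl is available and pointed at the containerd socket from this log:

    # list images known to containerd's Kubernetes namespace
    sudo ctr -n k8s.io images ls
    # or query the same images through the CRI API
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images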
Mar 6 01:42:32.696299 containerd[1451]: time="2026-03-06T01:42:32.696145620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:32.697366 containerd[1451]: time="2026-03-06T01:42:32.697248811Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 6 01:42:32.699062 containerd[1451]: time="2026-03-06T01:42:32.698994474Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:32.703991 containerd[1451]: time="2026-03-06T01:42:32.703876616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:32.706383 containerd[1451]: time="2026-03-06T01:42:32.706228409Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.241494623s" Mar 6 01:42:32.706383 containerd[1451]: time="2026-03-06T01:42:32.706335041Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 6 01:42:32.708479 containerd[1451]: time="2026-03-06T01:42:32.708374333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 6 01:42:33.516402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132446752.mount: Deactivated successfully. 
Mar 6 01:42:33.526028 containerd[1451]: time="2026-03-06T01:42:33.525707713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:33.527100 containerd[1451]: time="2026-03-06T01:42:33.527000118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 01:42:33.528981 containerd[1451]: time="2026-03-06T01:42:33.528882446Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:33.532951 containerd[1451]: time="2026-03-06T01:42:33.532861818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:33.534493 containerd[1451]: time="2026-03-06T01:42:33.534381512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 825.977857ms" Mar 6 01:42:33.534493 containerd[1451]: time="2026-03-06T01:42:33.534441580Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 6 01:42:33.537067 containerd[1451]: time="2026-03-06T01:42:33.536955954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 6 01:42:34.056184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649164599.mount: Deactivated successfully. Mar 6 01:42:34.895351 update_engine[1442]: I20260306 01:42:34.893662 1442 update_attempter.cc:509] Updating boot flags... 
Mar 6 01:42:35.026016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2015) Mar 6 01:42:37.419724 containerd[1451]: time="2026-03-06T01:42:37.419246663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:37.421702 containerd[1451]: time="2026-03-06T01:42:37.421425070Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 6 01:42:37.424134 containerd[1451]: time="2026-03-06T01:42:37.423667078Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:37.428877 containerd[1451]: time="2026-03-06T01:42:37.428728503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:37.431300 containerd[1451]: time="2026-03-06T01:42:37.431176608Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.894144897s" Mar 6 01:42:37.431300 containerd[1451]: time="2026-03-06T01:42:37.431255180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 6 01:42:42.113045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 6 01:42:42.129844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:42.437316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:42.462577 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:42:42.537268 kubelet[2104]: E0306 01:42:42.537200 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:42:42.543403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:42:42.543633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:42:43.690310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:43.709709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:43.761388 systemd[1]: Reloading requested from client PID 2119 ('systemctl') (unit session-9.scope)... Mar 6 01:42:43.761454 systemd[1]: Reloading... Mar 6 01:42:43.889209 zram_generator::config[2164]: No configuration found. Mar 6 01:42:44.040206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:42:44.155501 systemd[1]: Reloading finished in 393 ms. 
Mar 6 01:42:44.226067 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 01:42:44.226226 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 01:42:44.226554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:44.242623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:44.450446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:44.459329 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:42:44.546152 kubelet[2207]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:42:44.546152 kubelet[2207]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:42:44.546152 kubelet[2207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:42:44.546152 kubelet[2207]: I0306 01:42:44.546145 2207 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:42:44.884258 kubelet[2207]: I0306 01:42:44.884017 2207 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:42:44.884258 kubelet[2207]: I0306 01:42:44.884125 2207 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:42:44.884481 kubelet[2207]: I0306 01:42:44.884454 2207 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:42:44.916916 kubelet[2207]: E0306 01:42:44.916725 2207 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:42:44.919054 kubelet[2207]: I0306 01:42:44.918990 2207 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:42:44.936497 kubelet[2207]: E0306 01:42:44.936395 2207 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:42:44.936497 kubelet[2207]: I0306 01:42:44.936481 2207 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:42:44.950191 kubelet[2207]: I0306 01:42:44.950029 2207 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 01:42:44.952200 kubelet[2207]: I0306 01:42:44.952120 2207 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:42:44.952855 kubelet[2207]: I0306 01:42:44.952207 2207 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:42:44.953117 kubelet[2207]: I0306 01:42:44.953050 2207 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:42:44.953166 kubelet[2207]: I0306 01:42:44.953119 2207 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:42:44.953649 kubelet[2207]: I0306 01:42:44.953577 2207 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:42:44.959201 kubelet[2207]: I0306 01:42:44.959052 2207 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:42:44.959201 kubelet[2207]: I0306 01:42:44.959181 2207 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:42:44.959667 kubelet[2207]: I0306 01:42:44.959567 2207 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:42:44.960201 kubelet[2207]: I0306 01:42:44.959971 2207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:42:44.963853 kubelet[2207]: I0306 01:42:44.963711 2207 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:42:44.965041 kubelet[2207]: I0306 01:42:44.964708 2207 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:42:44.966123 kubelet[2207]: W0306 01:42:44.966023 2207 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 6 01:42:44.972250 kubelet[2207]: E0306 01:42:44.971978 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:42:44.972250 kubelet[2207]: E0306 01:42:44.971894 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:42:44.975873 kubelet[2207]: I0306 01:42:44.975669 2207 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:42:44.975873 kubelet[2207]: I0306 01:42:44.975863 2207 server.go:1289] "Started kubelet" Mar 6 01:42:44.976956 kubelet[2207]: I0306 01:42:44.976514 2207 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:42:44.978571 kubelet[2207]: I0306 01:42:44.977210 2207 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:42:44.978571 kubelet[2207]: I0306 01:42:44.978395 2207 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:42:44.978686 kubelet[2207]: I0306 01:42:44.978662 2207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:42:44.983831 kubelet[2207]: I0306 01:42:44.981834 2207 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:42:44.987871 kubelet[2207]: E0306 01:42:44.983658 2207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d11a6e9e8df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:42:44.975708383 +0000 UTC m=+0.508112179,LastTimestamp:2026-03-06 01:42:44.975708383 +0000 UTC m=+0.508112179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:42:44.987871 kubelet[2207]: I0306 01:42:44.985989 2207 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:42:44.987871 kubelet[2207]: E0306 01:42:44.986702 2207 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:42:44.988186 kubelet[2207]: I0306 01:42:44.988149 2207 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:42:44.988946 kubelet[2207]: E0306 01:42:44.988582 2207 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:42:44.990385 kubelet[2207]: I0306 01:42:44.990294 2207 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:42:44.990691 kubelet[2207]: I0306 01:42:44.990610 2207 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:42:44.994915 kubelet[2207]: E0306 01:42:44.992644 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:42:44.994915 kubelet[2207]: E0306 01:42:44.993347 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Mar 6 01:42:44.994915 kubelet[2207]: I0306 01:42:44.993740 2207 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:42:44.999109 kubelet[2207]: I0306 01:42:44.999023 2207 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:42:44.999109 kubelet[2207]: I0306 01:42:44.999110 2207 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:42:45.029972 kubelet[2207]: I0306 01:42:45.029915 2207 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:42:45.029972 kubelet[2207]: I0306 01:42:45.029970 2207 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:42:45.030140 kubelet[2207]: I0306 01:42:45.030033 2207 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:42:45.038500 kubelet[2207]: I0306 01:42:45.038318 2207 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:42:45.041705 kubelet[2207]: I0306 01:42:45.041670 2207 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 01:42:45.042625 kubelet[2207]: I0306 01:42:45.042117 2207 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:42:45.042625 kubelet[2207]: I0306 01:42:45.042244 2207 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 01:42:45.042625 kubelet[2207]: I0306 01:42:45.042335 2207 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:42:45.042625 kubelet[2207]: E0306 01:42:45.042410 2207 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:42:45.043126 kubelet[2207]: E0306 01:42:45.043033 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:42:45.089656 kubelet[2207]: E0306 01:42:45.089481 2207 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:42:45.110209 kubelet[2207]: I0306 01:42:45.109948 2207 policy_none.go:49] "None policy: Start" Mar 6 01:42:45.110209 kubelet[2207]: I0306 01:42:45.110087 2207 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:42:45.110209 kubelet[2207]: I0306 01:42:45.110135 2207 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:42:45.124110 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 01:42:45.145218 kubelet[2207]: E0306 01:42:45.142898 2207 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 6 01:42:45.150682 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 01:42:45.155589 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 6 01:42:45.170706 kubelet[2207]: E0306 01:42:45.170646 2207 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:42:45.171600 kubelet[2207]: I0306 01:42:45.171408 2207 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:42:45.171600 kubelet[2207]: I0306 01:42:45.171493 2207 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:42:45.173241 kubelet[2207]: I0306 01:42:45.173203 2207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:42:45.173730 kubelet[2207]: E0306 01:42:45.173633 2207 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:42:45.174647 kubelet[2207]: E0306 01:42:45.173925 2207 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 01:42:45.194876 kubelet[2207]: E0306 01:42:45.194521 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Mar 6 01:42:45.274181 kubelet[2207]: I0306 01:42:45.274016 2207 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:42:45.274886 kubelet[2207]: E0306 01:42:45.274745 2207 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 6 01:42:45.363882 systemd[1]: Created slice kubepods-burstable-pod372be3c59cd2c5497ad1311a6c29d8a9.slice - libcontainer container kubepods-burstable-pod372be3c59cd2c5497ad1311a6c29d8a9.slice. Mar 6 01:42:45.385705 kubelet[2207]: E0306 01:42:45.385560 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:45.390536 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 6 01:42:45.391819 kubelet[2207]: I0306 01:42:45.391615 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:45.391819 kubelet[2207]: I0306 01:42:45.391675 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:45.391819 kubelet[2207]: I0306 01:42:45.391710 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:45.392139 kubelet[2207]: I0306 01:42:45.391831 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:45.392139 kubelet[2207]: I0306 01:42:45.391861 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/372be3c59cd2c5497ad1311a6c29d8a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"372be3c59cd2c5497ad1311a6c29d8a9\") " 
pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:45.392139 kubelet[2207]: I0306 01:42:45.391887 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/372be3c59cd2c5497ad1311a6c29d8a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"372be3c59cd2c5497ad1311a6c29d8a9\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:45.392139 kubelet[2207]: I0306 01:42:45.391907 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:45.392139 kubelet[2207]: I0306 01:42:45.391929 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:45.392277 kubelet[2207]: I0306 01:42:45.391949 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/372be3c59cd2c5497ad1311a6c29d8a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"372be3c59cd2c5497ad1311a6c29d8a9\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:45.396033 kubelet[2207]: E0306 01:42:45.395895 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:45.400313 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 6 01:42:45.403231 kubelet[2207]: E0306 01:42:45.403150 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:45.478170 kubelet[2207]: I0306 01:42:45.478017 2207 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:42:45.478974 kubelet[2207]: E0306 01:42:45.478882 2207 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 6 01:42:45.595906 kubelet[2207]: E0306 01:42:45.595499 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Mar 6 01:42:45.687894 kubelet[2207]: E0306 01:42:45.687659 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:45.689448 containerd[1451]: time="2026-03-06T01:42:45.689301361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:372be3c59cd2c5497ad1311a6c29d8a9,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:45.696851 kubelet[2207]: E0306 01:42:45.696631 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:45.697563 containerd[1451]: time="2026-03-06T01:42:45.697490746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:45.704562 kubelet[2207]: E0306 01:42:45.704463 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:45.705357 containerd[1451]: time="2026-03-06T01:42:45.705284064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:45.881597 kubelet[2207]: I0306 01:42:45.881537 2207 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:42:45.882192 kubelet[2207]: E0306 01:42:45.881972 2207 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 6 01:42:46.135676 kubelet[2207]: E0306 01:42:46.135332 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:42:46.138300 kubelet[2207]: E0306 01:42:46.138227 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:42:46.145340 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1226858676.mount: Deactivated successfully. Mar 6 01:42:46.156878 containerd[1451]: time="2026-03-06T01:42:46.156670133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:46.161812 containerd[1451]: time="2026-03-06T01:42:46.161680301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 6 01:42:46.163197 containerd[1451]: time="2026-03-06T01:42:46.163097218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:46.164388 containerd[1451]: time="2026-03-06T01:42:46.164327269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:42:46.165473 containerd[1451]: time="2026-03-06T01:42:46.165410755Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:46.166899 containerd[1451]: time="2026-03-06T01:42:46.166863320Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:46.168169 containerd[1451]: time="2026-03-06T01:42:46.168081135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:42:46.170565 containerd[1451]: time="2026-03-06T01:42:46.170499008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:46.172446 containerd[1451]: time="2026-03-06T01:42:46.172367912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.961233ms" Mar 6 01:42:46.174681 containerd[1451]: time="2026-03-06T01:42:46.174607879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.089249ms" Mar 6 01:42:46.178586 containerd[1451]: time="2026-03-06T01:42:46.178497308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.881653ms" Mar 6 01:42:46.220182 kubelet[2207]: E0306 01:42:46.217529 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:42:46.222507 kubelet[2207]: E0306 01:42:46.222463 2207 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:42:46.327845 containerd[1451]: time="2026-03-06T01:42:46.326463514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:46.327845 containerd[1451]: time="2026-03-06T01:42:46.326690552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:46.327845 containerd[1451]: time="2026-03-06T01:42:46.326713674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:46.327845 containerd[1451]: time="2026-03-06T01:42:46.327124650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:46.327845 containerd[1451]: time="2026-03-06T01:42:46.327540180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:46.328091 containerd[1451]: time="2026-03-06T01:42:46.327686710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:46.328091 containerd[1451]: time="2026-03-06T01:42:46.327859688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:46.328140 containerd[1451]: time="2026-03-06T01:42:46.327983495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:46.337692 containerd[1451]: time="2026-03-06T01:42:46.337266236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:46.337692 containerd[1451]: time="2026-03-06T01:42:46.337339119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:46.337692 containerd[1451]: time="2026-03-06T01:42:46.337362382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:46.337692 containerd[1451]: time="2026-03-06T01:42:46.337475841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:46.373943 systemd[1]: Started cri-containerd-181f48b71c1c53864841379788e2e69eea526833e35073387d244cf857ff7195.scope - libcontainer container 181f48b71c1c53864841379788e2e69eea526833e35073387d244cf857ff7195. Mar 6 01:42:46.390865 systemd[1]: Started cri-containerd-9af5aa604f766f4432558178016a7481acba6dbb748ccd1026159618a85d8bd1.scope - libcontainer container 9af5aa604f766f4432558178016a7481acba6dbb748ccd1026159618a85d8bd1. 
Mar 6 01:42:46.396341 systemd[1]: Started cri-containerd-5f41a04e9ffa94298f506e826edf28eb89f7532c841bbda741b2b7f540d7f5ca.scope - libcontainer container 5f41a04e9ffa94298f506e826edf28eb89f7532c841bbda741b2b7f540d7f5ca. Mar 6 01:42:46.398876 kubelet[2207]: E0306 01:42:46.396991 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="1.6s" Mar 6 01:42:46.474864 containerd[1451]: time="2026-03-06T01:42:46.474248262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:372be3c59cd2c5497ad1311a6c29d8a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"181f48b71c1c53864841379788e2e69eea526833e35073387d244cf857ff7195\"" Mar 6 01:42:46.476407 kubelet[2207]: E0306 01:42:46.476269 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:46.511211 containerd[1451]: time="2026-03-06T01:42:46.511105170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f41a04e9ffa94298f506e826edf28eb89f7532c841bbda741b2b7f540d7f5ca\"" Mar 6 01:42:46.514231 containerd[1451]: time="2026-03-06T01:42:46.514089532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af5aa604f766f4432558178016a7481acba6dbb748ccd1026159618a85d8bd1\"" Mar 6 01:42:46.515377 kubelet[2207]: E0306 01:42:46.515177 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:46.517382 kubelet[2207]: E0306 01:42:46.516867 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:46.531871 containerd[1451]: time="2026-03-06T01:42:46.531675344Z" level=info msg="CreateContainer within sandbox \"181f48b71c1c53864841379788e2e69eea526833e35073387d244cf857ff7195\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:42:46.535201 containerd[1451]: time="2026-03-06T01:42:46.535096711Z" level=info msg="CreateContainer within sandbox \"5f41a04e9ffa94298f506e826edf28eb89f7532c841bbda741b2b7f540d7f5ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:42:46.540669 containerd[1451]: time="2026-03-06T01:42:46.540563571Z" level=info msg="CreateContainer within sandbox \"9af5aa604f766f4432558178016a7481acba6dbb748ccd1026159618a85d8bd1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:42:46.566149 containerd[1451]: time="2026-03-06T01:42:46.565988912Z" level=info msg="CreateContainer within sandbox \"181f48b71c1c53864841379788e2e69eea526833e35073387d244cf857ff7195\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d4b964b6d7ddaf1761aa232d10b28dac5e9a815e0f0dbb8ab1c8a0bac8b325d\"" Mar 6 01:42:46.567888 containerd[1451]: time="2026-03-06T01:42:46.567836428Z" level=info msg="StartContainer for \"2d4b964b6d7ddaf1761aa232d10b28dac5e9a815e0f0dbb8ab1c8a0bac8b325d\"" Mar 6 01:42:46.575621 
containerd[1451]: time="2026-03-06T01:42:46.575572927Z" level=info msg="CreateContainer within sandbox \"5f41a04e9ffa94298f506e826edf28eb89f7532c841bbda741b2b7f540d7f5ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c8ae27235c6e31b207d22132e8bb00142c9da095e783992ab697e4bb429eacf\"" Mar 6 01:42:46.578407 containerd[1451]: time="2026-03-06T01:42:46.576989727Z" level=info msg="StartContainer for \"2c8ae27235c6e31b207d22132e8bb00142c9da095e783992ab697e4bb429eacf\"" Mar 6 01:42:46.582192 containerd[1451]: time="2026-03-06T01:42:46.581999479Z" level=info msg="CreateContainer within sandbox \"9af5aa604f766f4432558178016a7481acba6dbb748ccd1026159618a85d8bd1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e38fe6a928a00796a06b047394e7075385a4a2e1c3447ed5fd224a44592ebd9d\"" Mar 6 01:42:46.584441 containerd[1451]: time="2026-03-06T01:42:46.584281063Z" level=info msg="StartContainer for \"e38fe6a928a00796a06b047394e7075385a4a2e1c3447ed5fd224a44592ebd9d\"" Mar 6 01:42:46.635112 systemd[1]: Started cri-containerd-2d4b964b6d7ddaf1761aa232d10b28dac5e9a815e0f0dbb8ab1c8a0bac8b325d.scope - libcontainer container 2d4b964b6d7ddaf1761aa232d10b28dac5e9a815e0f0dbb8ab1c8a0bac8b325d. Mar 6 01:42:46.642924 systemd[1]: Started cri-containerd-2c8ae27235c6e31b207d22132e8bb00142c9da095e783992ab697e4bb429eacf.scope - libcontainer container 2c8ae27235c6e31b207d22132e8bb00142c9da095e783992ab697e4bb429eacf. Mar 6 01:42:46.646246 systemd[1]: Started cri-containerd-e38fe6a928a00796a06b047394e7075385a4a2e1c3447ed5fd224a44592ebd9d.scope - libcontainer container e38fe6a928a00796a06b047394e7075385a4a2e1c3447ed5fd224a44592ebd9d. Mar 6 01:42:46.686861 kubelet[2207]: I0306 01:42:46.686735 2207 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:42:46.687663 kubelet[2207]: E0306 01:42:46.687183 2207 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 6 01:42:46.703854 containerd[1451]: time="2026-03-06T01:42:46.703461445Z" level=info msg="StartContainer for \"2d4b964b6d7ddaf1761aa232d10b28dac5e9a815e0f0dbb8ab1c8a0bac8b325d\" returns successfully" Mar 6 01:42:46.739663 containerd[1451]: time="2026-03-06T01:42:46.738889028Z" level=info msg="StartContainer for \"2c8ae27235c6e31b207d22132e8bb00142c9da095e783992ab697e4bb429eacf\" returns successfully" Mar 6 01:42:46.743232 containerd[1451]: time="2026-03-06T01:42:46.743136647Z" level=info msg="StartContainer for \"e38fe6a928a00796a06b047394e7075385a4a2e1c3447ed5fd224a44592ebd9d\" returns successfully" Mar 6 01:42:47.054309 kubelet[2207]: E0306 01:42:47.053858 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:47.054309 kubelet[2207]: E0306 01:42:47.053973 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:47.058613 kubelet[2207]: E0306 01:42:47.058597 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:47.058881 kubelet[2207]: E0306 01:42:47.058865 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:47.062858 kubelet[2207]: E0306 01:42:47.062730 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:47.063106 kubelet[2207]: E0306 01:42:47.062983 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:48.069397 kubelet[2207]: E0306 01:42:48.069317 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:48.072025 kubelet[2207]: E0306 01:42:48.069513 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:48.072025 kubelet[2207]: E0306 01:42:48.070468 2207 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:48.072025 kubelet[2207]: E0306 01:42:48.071079 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:48.291112 kubelet[2207]: I0306 01:42:48.291074 2207 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:42:48.751674 kubelet[2207]: E0306 01:42:48.751580 2207 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 01:42:48.842155 kubelet[2207]: I0306 01:42:48.842049 2207 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:42:48.842155 kubelet[2207]: E0306 01:42:48.842156 2207 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 01:42:48.891121 kubelet[2207]: I0306 01:42:48.891046 2207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:48.903661 kubelet[2207]: E0306 01:42:48.903556 2207 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:48.903847 kubelet[2207]: I0306 01:42:48.903682 2207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:48.907445 kubelet[2207]: E0306 01:42:48.907302 2207 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:48.907445 kubelet[2207]: I0306 01:42:48.907321 2207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:48.909930 kubelet[2207]: E0306 01:42:48.909863 2207 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:48.964567 kubelet[2207]: I0306 01:42:48.964482 2207 apiserver.go:52] "Watching apiserver" Mar 6 01:42:48.991159 kubelet[2207]: I0306 01:42:48.990884 
2207 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:42:49.310741 kubelet[2207]: I0306 01:42:49.310648 2207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:49.313179 kubelet[2207]: E0306 01:42:49.313114 2207 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:49.313367 kubelet[2207]: E0306 01:42:49.313330 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:50.929443 systemd[1]: Reloading requested from client PID 2500 ('systemctl') (unit session-9.scope)... Mar 6 01:42:50.929483 systemd[1]: Reloading... Mar 6 01:42:51.045952 zram_generator::config[2539]: No configuration found. Mar 6 01:42:51.161260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:42:51.251336 systemd[1]: Reloading finished in 321 ms. Mar 6 01:42:51.303010 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:51.317380 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:42:51.317905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:51.318034 systemd[1]: kubelet.service: Consumed 1.593s CPU time, 134.0M memory peak, 0B memory swap peak. Mar 6 01:42:51.328409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:51.496076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:51.502950 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:42:51.566661 kubelet[2584]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:42:51.566661 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:42:51.566661 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 01:42:51.567216 kubelet[2584]: I0306 01:42:51.566681 2584 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:42:51.572938 kubelet[2584]: I0306 01:42:51.572889 2584 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:42:51.572938 kubelet[2584]: I0306 01:42:51.572939 2584 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:42:51.573166 kubelet[2584]: I0306 01:42:51.573121 2584 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:42:51.574351 kubelet[2584]: I0306 01:42:51.574264 2584 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 01:42:51.578843 kubelet[2584]: I0306 01:42:51.578616 2584 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:42:51.582836 kubelet[2584]: E0306 01:42:51.582794 2584 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:42:51.582836 kubelet[2584]: I0306 01:42:51.582817 2584 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:42:51.588559 kubelet[2584]: I0306 01:42:51.588489 2584 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 6 01:42:51.589011 kubelet[2584]: I0306 01:42:51.588896 2584 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:42:51.589091 kubelet[2584]: I0306 01:42:51.588965 2584 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:42:51.589091 kubelet[2584]: I0306 01:42:51.589080 2584 topology_manager.go:138] "Creating topology manager with 
none policy" Mar 6 01:42:51.589091 kubelet[2584]: I0306 01:42:51.589089 2584 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:42:51.589301 kubelet[2584]: I0306 01:42:51.589138 2584 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:42:51.589346 kubelet[2584]: I0306 01:42:51.589332 2584 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:42:51.589400 kubelet[2584]: I0306 01:42:51.589348 2584 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:42:51.589400 kubelet[2584]: I0306 01:42:51.589370 2584 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:42:51.589469 kubelet[2584]: I0306 01:42:51.589441 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:42:51.591725 kubelet[2584]: I0306 01:42:51.591687 2584 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:42:51.593694 kubelet[2584]: I0306 01:42:51.593556 2584 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:42:51.604643 kubelet[2584]: I0306 01:42:51.604580 2584 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:42:51.604705 kubelet[2584]: I0306 01:42:51.604661 2584 server.go:1289] "Started kubelet" Mar 6 01:42:51.605858 kubelet[2584]: I0306 01:42:51.605429 2584 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:42:51.606178 kubelet[2584]: I0306 01:42:51.606021 2584 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:42:51.606607 kubelet[2584]: I0306 01:42:51.606500 2584 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:42:51.608872 kubelet[2584]: I0306 01:42:51.608724 2584 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:42:51.609252 kubelet[2584]: I0306 01:42:51.609170 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:42:51.612656 kubelet[2584]: I0306 01:42:51.610050 2584 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:42:51.612656 kubelet[2584]: I0306 01:42:51.611133 2584 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:42:51.612656 kubelet[2584]: I0306 01:42:51.611301 2584 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:42:51.612656 kubelet[2584]: I0306 01:42:51.611424 2584 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:42:51.614020 kubelet[2584]: I0306 01:42:51.613990 2584 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:42:51.614181 kubelet[2584]: I0306 01:42:51.614084 2584 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:42:51.614705 kubelet[2584]: E0306 01:42:51.614645 2584 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:42:51.617451 kubelet[2584]: I0306 01:42:51.617416 2584 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:42:51.643603 kubelet[2584]: I0306 01:42:51.643203 2584 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:42:51.646962 kubelet[2584]: I0306 01:42:51.646865 2584 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 01:42:51.646962 kubelet[2584]: I0306 01:42:51.646948 2584 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:42:51.647080 kubelet[2584]: I0306 01:42:51.646975 2584 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 01:42:51.647080 kubelet[2584]: I0306 01:42:51.646987 2584 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:42:51.647162 kubelet[2584]: E0306 01:42:51.647045 2584 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:42:51.669942 kubelet[2584]: I0306 01:42:51.669710 2584 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:42:51.669942 kubelet[2584]: I0306 01:42:51.669743 2584 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:42:51.669942 kubelet[2584]: I0306 01:42:51.669820 2584 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:42:51.670138 kubelet[2584]: I0306 01:42:51.669975 2584 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 01:42:51.670138 kubelet[2584]: I0306 01:42:51.669985 2584 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 01:42:51.670138 kubelet[2584]: I0306 01:42:51.669999 2584 policy_none.go:49] "None policy: Start" Mar 6 01:42:51.670138 kubelet[2584]: I0306 01:42:51.670009 2584 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:42:51.670138 kubelet[2584]: I0306 01:42:51.670019 2584 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:42:51.670138 kubelet[2584]: I0306 01:42:51.670104 2584 state_mem.go:75] "Updated machine memory state" Mar 6 01:42:51.676100 kubelet[2584]: E0306 01:42:51.675700 2584 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:42:51.676305 kubelet[2584]: I0306 01:42:51.676285 2584 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:42:51.676495 kubelet[2584]: I0306 01:42:51.676455 2584 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:42:51.676881 kubelet[2584]: I0306 01:42:51.676735 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:42:51.677991 kubelet[2584]: E0306 01:42:51.677964 2584 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:42:51.748674 kubelet[2584]: I0306 01:42:51.748619 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:51.748674 kubelet[2584]: I0306 01:42:51.748643 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:51.749001 kubelet[2584]: I0306 01:42:51.748939 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:51.787593 kubelet[2584]: I0306 01:42:51.787410 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:42:51.800139 kubelet[2584]: I0306 01:42:51.800106 2584 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 01:42:51.800421 kubelet[2584]: I0306 01:42:51.800357 2584 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:42:51.812729 kubelet[2584]: I0306 01:42:51.812671 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/372be3c59cd2c5497ad1311a6c29d8a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"372be3c59cd2c5497ad1311a6c29d8a9\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:51.913720 kubelet[2584]: I0306 01:42:51.913621 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/372be3c59cd2c5497ad1311a6c29d8a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"372be3c59cd2c5497ad1311a6c29d8a9\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:51.913720 kubelet[2584]: I0306 01:42:51.913689 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:51.913720 kubelet[2584]: I0306 01:42:51.913717 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:51.914015 kubelet[2584]: I0306 01:42:51.913739 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:51.914015 kubelet[2584]: I0306 01:42:51.913944 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:51.914015 kubelet[2584]: I0306 01:42:51.913969 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:51.914015 kubelet[2584]: I0306 01:42:51.913988 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:51.914015 kubelet[2584]: I0306 01:42:51.914007 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/372be3c59cd2c5497ad1311a6c29d8a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"372be3c59cd2c5497ad1311a6c29d8a9\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:52.055303 kubelet[2584]: E0306 01:42:52.055103 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:52.057382 kubelet[2584]: E0306 01:42:52.057294 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:52.057465 kubelet[2584]: E0306 01:42:52.057321 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:52.594874 kubelet[2584]: I0306 01:42:52.593353 2584 apiserver.go:52] "Watching apiserver" Mar 6 01:42:52.710316 kubelet[2584]: I0306 01:42:52.703568 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:52.712456 kubelet[2584]: I0306 01:42:52.711545 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:52.712456 kubelet[2584]: I0306 01:42:52.711625 2584 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:42:52.713299 kubelet[2584]: E0306 01:42:52.712660 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:53.007580 kubelet[2584]: E0306 01:42:53.007418 2584 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:53.018184 kubelet[2584]: E0306 01:42:53.017220 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:53.018184 kubelet[2584]: E0306 01:42:53.017546 2584 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:53.025646 kubelet[2584]: E0306 01:42:53.025508 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:53.059270 kubelet[2584]: I0306 01:42:53.045476 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.045229149 podStartE2EDuration="2.045229149s" podCreationTimestamp="2026-03-06 01:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:53.035654151 +0000 UTC m=+1.513133394" watchObservedRunningTime="2026-03-06 01:42:53.045229149 +0000 UTC m=+1.522708342" Mar 6 01:42:53.216282 kubelet[2584]: I0306 01:42:53.213999 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.212584262 podStartE2EDuration="2.212584262s" podCreationTimestamp="2026-03-06 01:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:53.199387018 +0000 UTC m=+1.676866211" watchObservedRunningTime="2026-03-06 01:42:53.212584262 +0000 UTC m=+1.690063455" Mar 6 01:42:53.717848 kubelet[2584]: E0306 01:42:53.717628 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:53.720266 kubelet[2584]: E0306 01:42:53.719180 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:56.203483 kubelet[2584]: E0306 01:42:56.201452 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:56.488716 kubelet[2584]: I0306 01:42:56.485966 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.485919949 podStartE2EDuration="5.485919949s" podCreationTimestamp="2026-03-06 01:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:53.226487695 +0000 UTC m=+1.703966888" watchObservedRunningTime="2026-03-06 01:42:56.485919949 +0000 UTC m=+4.963399143" Mar 6 01:42:57.032405 kubelet[2584]: E0306 01:42:57.032141 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:57.998134 kubelet[2584]: I0306 01:42:57.997997 2584 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:42:57.998939 kubelet[2584]: I0306 01:42:57.998698 2584 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:42:57.998973 containerd[1451]: time="2026-03-06T01:42:57.998483742Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 01:42:58.034954 kubelet[2584]: E0306 01:42:58.034904 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:58.070607 kubelet[2584]: E0306 01:42:58.070440 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:58.414271 kubelet[2584]: E0306 01:42:58.414194 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:59.040390 kubelet[2584]: E0306 01:42:59.038617 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:59.040390 kubelet[2584]: E0306 01:42:59.039104 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:59.111324 systemd[1]: Created slice kubepods-besteffort-pod57125af2_8b92_46a2_a238_7d4951749fbf.slice - libcontainer container kubepods-besteffort-pod57125af2_8b92_46a2_a238_7d4951749fbf.slice. Mar 6 01:42:59.189896 systemd[1]: Created slice kubepods-besteffort-pod9ba0d37e_a3ab_4148_894b_7466b1f922ef.slice - libcontainer container kubepods-besteffort-pod9ba0d37e_a3ab_4148_894b_7466b1f922ef.slice. Mar 6 01:42:59.214050 kubelet[2584]: I0306 01:42:59.213967 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57125af2-8b92-46a2-a238-7d4951749fbf-kube-proxy\") pod \"kube-proxy-xhzt7\" (UID: \"57125af2-8b92-46a2-a238-7d4951749fbf\") " pod="kube-system/kube-proxy-xhzt7" Mar 6 01:42:59.214213 kubelet[2584]: I0306 01:42:59.214134 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57125af2-8b92-46a2-a238-7d4951749fbf-xtables-lock\") pod \"kube-proxy-xhzt7\" (UID: \"57125af2-8b92-46a2-a238-7d4951749fbf\") " pod="kube-system/kube-proxy-xhzt7" Mar 6 01:42:59.214213 kubelet[2584]: I0306 01:42:59.214171 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vhbk\" (UniqueName: \"kubernetes.io/projected/57125af2-8b92-46a2-a238-7d4951749fbf-kube-api-access-2vhbk\") pod \"kube-proxy-xhzt7\" (UID: \"57125af2-8b92-46a2-a238-7d4951749fbf\") " pod="kube-system/kube-proxy-xhzt7" Mar 6 01:42:59.214302 kubelet[2584]: I0306 01:42:59.214209 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57125af2-8b92-46a2-a238-7d4951749fbf-lib-modules\") pod \"kube-proxy-xhzt7\" (UID: \"57125af2-8b92-46a2-a238-7d4951749fbf\") " pod="kube-system/kube-proxy-xhzt7" Mar 6 01:42:59.315239 kubelet[2584]: I0306 01:42:59.314938 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ba0d37e-a3ab-4148-894b-7466b1f922ef-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-ffb6p\" (UID: \"9ba0d37e-a3ab-4148-894b-7466b1f922ef\") " pod="tigera-operator/tigera-operator-6bf85f8dd-ffb6p" Mar 6 01:42:59.315239 
kubelet[2584]: I0306 01:42:59.314986 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2gtp\" (UniqueName: \"kubernetes.io/projected/9ba0d37e-a3ab-4148-894b-7466b1f922ef-kube-api-access-m2gtp\") pod \"tigera-operator-6bf85f8dd-ffb6p\" (UID: \"9ba0d37e-a3ab-4148-894b-7466b1f922ef\") " pod="tigera-operator/tigera-operator-6bf85f8dd-ffb6p" Mar 6 01:42:59.426197 kubelet[2584]: E0306 01:42:59.426131 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:59.428093 containerd[1451]: time="2026-03-06T01:42:59.426979921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xhzt7,Uid:57125af2-8b92-46a2-a238-7d4951749fbf,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:59.509010 containerd[1451]: time="2026-03-06T01:42:59.508691396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-ffb6p,Uid:9ba0d37e-a3ab-4148-894b-7466b1f922ef,Namespace:tigera-operator,Attempt:0,}" Mar 6 01:42:59.597554 containerd[1451]: time="2026-03-06T01:42:59.597300583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:59.597554 containerd[1451]: time="2026-03-06T01:42:59.597373638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:59.597892 containerd[1451]: time="2026-03-06T01:42:59.597837778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:59.600311 containerd[1451]: time="2026-03-06T01:42:59.600253638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:59.610674 containerd[1451]: time="2026-03-06T01:42:59.610605815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:59.613133 containerd[1451]: time="2026-03-06T01:42:59.613037605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:59.613437 containerd[1451]: time="2026-03-06T01:42:59.613119947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:59.613626 containerd[1451]: time="2026-03-06T01:42:59.613407931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:59.639200 systemd[1]: Started cri-containerd-1fba9d940872de022b12f6bda85598642096032ac5a88c47488a89b0d6ab54a4.scope - libcontainer container 1fba9d940872de022b12f6bda85598642096032ac5a88c47488a89b0d6ab54a4. Mar 6 01:42:59.665744 systemd[1]: Started cri-containerd-8ead5ff1f08ed35a039f02787c017a8f710b8e2c5eb82403a321b639d3589c96.scope - libcontainer container 8ead5ff1f08ed35a039f02787c017a8f710b8e2c5eb82403a321b639d3589c96. 
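The repeated dns.go:153 warnings above come from kubelet's resolv.conf handling: the resolver (and kubelet's validation) honours at most three nameservers, so when the node's resolv.conf lists more, kubelet keeps the first three and logs the nameserver line it actually applied, here "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Python sketch of that truncation logic (illustrative only, not kubelet's actual code; the three-server cap is the limit implied by the log message):

# Illustrative sketch of the "Nameserver limits exceeded" check seen above:
# keep only the first three nameservers from a resolv.conf-style text.
MAX_NAMESERVERS = 3  # resolver limit assumed from the kubelet warning

def applied_nameservers(resolv_conf_text):
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been omitted")
    return servers[:MAX_NAMESERVERS]

# Example: four configured nameservers, only the first three are applied.
print(applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"))

The warning repeats every time kubelet rebuilds pod DNS configuration, which is why it appears throughout this log rather than once.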
Mar 6 01:42:59.919636 containerd[1451]: time="2026-03-06T01:42:59.919591411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xhzt7,Uid:57125af2-8b92-46a2-a238-7d4951749fbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fba9d940872de022b12f6bda85598642096032ac5a88c47488a89b0d6ab54a4\"" Mar 6 01:42:59.925852 kubelet[2584]: E0306 01:42:59.923993 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:59.933694 containerd[1451]: time="2026-03-06T01:42:59.932856869Z" level=info msg="CreateContainer within sandbox \"1fba9d940872de022b12f6bda85598642096032ac5a88c47488a89b0d6ab54a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:42:59.935572 containerd[1451]: time="2026-03-06T01:42:59.935067459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-ffb6p,Uid:9ba0d37e-a3ab-4148-894b-7466b1f922ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8ead5ff1f08ed35a039f02787c017a8f710b8e2c5eb82403a321b639d3589c96\"" Mar 6 01:42:59.937894 containerd[1451]: time="2026-03-06T01:42:59.936933118Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 01:42:59.955924 containerd[1451]: time="2026-03-06T01:42:59.955674967Z" level=info msg="CreateContainer within sandbox \"1fba9d940872de022b12f6bda85598642096032ac5a88c47488a89b0d6ab54a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83bd31e6ca9deb89d781e7027460b4efa6d826f5d759c3a9973afbd0fd7dbf0a\"" Mar 6 01:42:59.957242 containerd[1451]: time="2026-03-06T01:42:59.957062190Z" level=info msg="StartContainer for \"83bd31e6ca9deb89d781e7027460b4efa6d826f5d759c3a9973afbd0fd7dbf0a\"" Mar 6 01:43:00.003612 systemd[1]: Started cri-containerd-83bd31e6ca9deb89d781e7027460b4efa6d826f5d759c3a9973afbd0fd7dbf0a.scope - libcontainer container 83bd31e6ca9deb89d781e7027460b4efa6d826f5d759c3a9973afbd0fd7dbf0a. Mar 6 01:43:00.091449 kubelet[2584]: E0306 01:43:00.091247 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:00.188307 containerd[1451]: time="2026-03-06T01:43:00.188131136Z" level=info msg="StartContainer for \"83bd31e6ca9deb89d781e7027460b4efa6d826f5d759c3a9973afbd0fd7dbf0a\" returns successfully" Mar 6 01:43:01.093779 kubelet[2584]: E0306 01:43:01.093682 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:01.107931 kubelet[2584]: I0306 01:43:01.107846 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xhzt7" podStartSLOduration=2.106129004 podStartE2EDuration="2.106129004s" podCreationTimestamp="2026-03-06 01:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:43:01.105947147 +0000 UTC m=+9.583426350" watchObservedRunningTime="2026-03-06 01:43:01.106129004 +0000 UTC m=+9.583608198" Mar 6 01:43:01.391580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476465424.mount: Deactivated successfully. 
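The pod_startup_latency_tracker entry above reports podStartSLOduration=2.106129004s for kube-proxy-xhzt7. Both pull timestamps are the zero value (0001-01-01), so no image-pull time is subtracted and the duration is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic using the timestamps from the entry (truncated to microseconds, since that is what datetime carries):

from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker entry above.
created = datetime(2026, 3, 6, 1, 42, 59, tzinfo=timezone.utc)              # podCreationTimestamp
watched = datetime(2026, 3, 6, 1, 43, 1, 106129, tzinfo=timezone.utc)       # watchObservedRunningTime

# No image pull occurred, so nothing is excluded from the SLO duration.
print((watched - created).total_seconds())   # ~2.106129 s, matching podStartSLOduration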
Mar 6 01:43:02.151166 kubelet[2584]: E0306 01:43:02.151084 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:05.726644 containerd[1451]: time="2026-03-06T01:43:05.726541217Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:05.728009 containerd[1451]: time="2026-03-06T01:43:05.727934276Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 6 01:43:05.729670 containerd[1451]: time="2026-03-06T01:43:05.729601124Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:05.733455 containerd[1451]: time="2026-03-06T01:43:05.733365984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:05.734822 containerd[1451]: time="2026-03-06T01:43:05.734464356Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.797499852s" Mar 6 01:43:05.734822 containerd[1451]: time="2026-03-06T01:43:05.734551098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 6 01:43:05.745060 containerd[1451]: time="2026-03-06T01:43:05.744971188Z" level=info msg="CreateContainer within sandbox \"8ead5ff1f08ed35a039f02787c017a8f710b8e2c5eb82403a321b639d3589c96\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 01:43:05.820504 containerd[1451]: time="2026-03-06T01:43:05.820388184Z" level=info msg="CreateContainer within sandbox \"8ead5ff1f08ed35a039f02787c017a8f710b8e2c5eb82403a321b639d3589c96\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f24ea887164c60b6d3b2bf3514d6a44da653123de441de54029b07bb8d0e2f2c\"" Mar 6 01:43:05.821470 containerd[1451]: time="2026-03-06T01:43:05.821306651Z" level=info msg="StartContainer for \"f24ea887164c60b6d3b2bf3514d6a44da653123de441de54029b07bb8d0e2f2c\"" Mar 6 01:43:05.878002 systemd[1]: Started cri-containerd-f24ea887164c60b6d3b2bf3514d6a44da653123de441de54029b07bb8d0e2f2c.scope - libcontainer container f24ea887164c60b6d3b2bf3514d6a44da653123de441de54029b07bb8d0e2f2c. 
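The pull above finishes with "bytes read=40846156" over 5.797499852s for the tigera/operator image; from those two figures alone the effective download rate works out to roughly 6.7 MiB/s (a back-of-the-envelope number, assuming the reported byte count covers the whole pull):

# Figures taken from the containerd entries above.
bytes_read   = 40_846_156        # "active requests=0, bytes read=40846156"
pull_seconds = 5.797_499_852     # "... in 5.797499852s"

rate_mib_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"{rate_mib_s:.1f} MiB/s")  # ~6.7 MiB/s for this pull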
Mar 6 01:43:05.983374 containerd[1451]: time="2026-03-06T01:43:05.983234096Z" level=info msg="StartContainer for \"f24ea887164c60b6d3b2bf3514d6a44da653123de441de54029b07bb8d0e2f2c\" returns successfully" Mar 6 01:43:06.176368 kubelet[2584]: I0306 01:43:06.176210 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-ffb6p" podStartSLOduration=1.3767208389999999 podStartE2EDuration="7.176193792s" podCreationTimestamp="2026-03-06 01:42:59 +0000 UTC" firstStartedPulling="2026-03-06 01:42:59.936404718 +0000 UTC m=+8.413883911" lastFinishedPulling="2026-03-06 01:43:05.735877671 +0000 UTC m=+14.213356864" observedRunningTime="2026-03-06 01:43:06.175951837 +0000 UTC m=+14.653431080" watchObservedRunningTime="2026-03-06 01:43:06.176193792 +0000 UTC m=+14.653672985" Mar 6 01:43:11.780518 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 6 01:43:11.788966 sshd[1644]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:11.793309 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:59974.service: Deactivated successfully. Mar 6 01:43:11.800596 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 01:43:11.802093 systemd[1]: session-9.scope: Consumed 14.470s CPU time, 163.3M memory peak, 0B memory swap peak. Mar 6 01:43:11.805560 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Mar 6 01:43:11.807704 systemd-logind[1439]: Removed session 9. Mar 6 01:43:14.035589 systemd[1]: Created slice kubepods-besteffort-pod685b682f_7349_42dd_9e38_fb530efcf4fa.slice - libcontainer container kubepods-besteffort-pod685b682f_7349_42dd_9e38_fb530efcf4fa.slice. Mar 6 01:43:14.119658 systemd[1]: Created slice kubepods-besteffort-pod2856abd4_6b35_4d2e_a7e7_ba7c43f7199d.slice - libcontainer container kubepods-besteffort-pod2856abd4_6b35_4d2e_a7e7_ba7c43f7199d.slice. 
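For the tigera-operator pod the tracker reports two different figures: podStartE2EDuration=7.176193792s (creation to observed running) and podStartSLOduration≈1.377s, which excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. The relationship can be verified directly from the timestamps in the entry (microsecond precision):

from datetime import datetime, timezone

def ts(s):
    # Parse the "2026-03-06 01:42:59.936404" style timestamps from the log (µs precision).
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = datetime(2026, 3, 6, 1, 42, 59, tzinfo=timezone.utc)  # podCreationTimestamp
first_pull = ts("2026-03-06 01:42:59.936404")                      # firstStartedPulling
last_pull  = ts("2026-03-06 01:43:05.735877")                      # lastFinishedPulling
observed   = ts("2026-03-06 01:43:06.176193")                      # watchObservedRunningTime

e2e = (observed - created).total_seconds()                 # ~7.176 s (podStartE2EDuration)
slo = e2e - (last_pull - first_pull).total_seconds()       # ~1.377 s (podStartSLOduration)
print(e2e, slo)

So almost all of the 7.2s end-to-end startup for this pod was spent pulling the operator image, as the previous pull entry already showed.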
Mar 6 01:43:14.153510 kubelet[2584]: I0306 01:43:14.153392 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-lib-modules\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.153510 kubelet[2584]: I0306 01:43:14.153473 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-var-lib-calico\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154236 kubelet[2584]: I0306 01:43:14.153560 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbff\" (UniqueName: \"kubernetes.io/projected/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-kube-api-access-7gbff\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154236 kubelet[2584]: I0306 01:43:14.153660 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685b682f-7349-42dd-9e38-fb530efcf4fa-tigera-ca-bundle\") pod \"calico-typha-789465fdc5-wqqrp\" (UID: \"685b682f-7349-42dd-9e38-fb530efcf4fa\") " pod="calico-system/calico-typha-789465fdc5-wqqrp" Mar 6 01:43:14.154236 kubelet[2584]: I0306 01:43:14.153690 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-var-run-calico\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154236 kubelet[2584]: I0306 01:43:14.153729 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-cni-net-dir\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154236 kubelet[2584]: I0306 01:43:14.153845 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-nodeproc\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154452 kubelet[2584]: I0306 01:43:14.153872 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-sys-fs\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154452 kubelet[2584]: I0306 01:43:14.153899 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-flexvol-driver-host\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154452 kubelet[2584]: I0306 01:43:14.153927 2584 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-cni-bin-dir\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154452 kubelet[2584]: I0306 01:43:14.153967 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-cni-log-dir\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154452 kubelet[2584]: I0306 01:43:14.153999 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-node-certs\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154624 kubelet[2584]: I0306 01:43:14.154023 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-tigera-ca-bundle\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154624 kubelet[2584]: I0306 01:43:14.154074 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-policysync\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154624 kubelet[2584]: I0306 01:43:14.154101 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdtxd\" (UniqueName: \"kubernetes.io/projected/685b682f-7349-42dd-9e38-fb530efcf4fa-kube-api-access-zdtxd\") pod \"calico-typha-789465fdc5-wqqrp\" (UID: \"685b682f-7349-42dd-9e38-fb530efcf4fa\") " pod="calico-system/calico-typha-789465fdc5-wqqrp" Mar 6 01:43:14.154624 kubelet[2584]: I0306 01:43:14.154125 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-bpffs\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154624 kubelet[2584]: I0306 01:43:14.154146 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2856abd4-6b35-4d2e-a7e7-ba7c43f7199d-xtables-lock\") pod \"calico-node-c7bl6\" (UID: \"2856abd4-6b35-4d2e-a7e7-ba7c43f7199d\") " pod="calico-system/calico-node-c7bl6" Mar 6 01:43:14.154907 kubelet[2584]: I0306 01:43:14.154171 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/685b682f-7349-42dd-9e38-fb530efcf4fa-typha-certs\") pod \"calico-typha-789465fdc5-wqqrp\" (UID: \"685b682f-7349-42dd-9e38-fb530efcf4fa\") " pod="calico-system/calico-typha-789465fdc5-wqqrp" Mar 6 01:43:14.235668 kubelet[2584]: E0306 01:43:14.235496 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:14.254838 kubelet[2584]: I0306 01:43:14.254667 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/977c9795-dcad-4a6a-8717-7b63d6db97ee-socket-dir\") pod \"csi-node-driver-df657\" (UID: \"977c9795-dcad-4a6a-8717-7b63d6db97ee\") " pod="calico-system/csi-node-driver-df657" Mar 6 01:43:14.254990 kubelet[2584]: I0306 01:43:14.254858 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/977c9795-dcad-4a6a-8717-7b63d6db97ee-registration-dir\") pod \"csi-node-driver-df657\" (UID: \"977c9795-dcad-4a6a-8717-7b63d6db97ee\") " pod="calico-system/csi-node-driver-df657" Mar 6 01:43:14.254990 kubelet[2584]: I0306 01:43:14.254924 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/977c9795-dcad-4a6a-8717-7b63d6db97ee-varrun\") pod \"csi-node-driver-df657\" (UID: \"977c9795-dcad-4a6a-8717-7b63d6db97ee\") " pod="calico-system/csi-node-driver-df657" Mar 6 01:43:14.255075 kubelet[2584]: I0306 01:43:14.255024 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqvn\" (UniqueName: \"kubernetes.io/projected/977c9795-dcad-4a6a-8717-7b63d6db97ee-kube-api-access-rlqvn\") pod \"csi-node-driver-df657\" (UID: \"977c9795-dcad-4a6a-8717-7b63d6db97ee\") " pod="calico-system/csi-node-driver-df657" Mar 6 01:43:14.255196 kubelet[2584]: I0306 01:43:14.255147 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/977c9795-dcad-4a6a-8717-7b63d6db97ee-kubelet-dir\") pod \"csi-node-driver-df657\" (UID: \"977c9795-dcad-4a6a-8717-7b63d6db97ee\") " pod="calico-system/csi-node-driver-df657" Mar 6 01:43:14.264311 kubelet[2584]: E0306 01:43:14.264271 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.264651 kubelet[2584]: W0306 01:43:14.264477 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.264651 kubelet[2584]: E0306 01:43:14.264599 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.287911 kubelet[2584]: E0306 01:43:14.286912 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.287911 kubelet[2584]: W0306 01:43:14.286951 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.287911 kubelet[2584]: E0306 01:43:14.286980 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.290842 kubelet[2584]: E0306 01:43:14.290730 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.290842 kubelet[2584]: W0306 01:43:14.290825 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.290842 kubelet[2584]: E0306 01:43:14.290841 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.292004 kubelet[2584]: E0306 01:43:14.291939 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.292004 kubelet[2584]: W0306 01:43:14.291982 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.292004 kubelet[2584]: E0306 01:43:14.291994 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.300081 kubelet[2584]: E0306 01:43:14.300007 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.300081 kubelet[2584]: W0306 01:43:14.300062 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.300081 kubelet[2584]: E0306 01:43:14.300083 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.342033 kubelet[2584]: E0306 01:43:14.341885 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:14.342945 containerd[1451]: time="2026-03-06T01:43:14.342867854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789465fdc5-wqqrp,Uid:685b682f-7349-42dd-9e38-fb530efcf4fa,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:14.356265 kubelet[2584]: E0306 01:43:14.356209 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.356265 kubelet[2584]: W0306 01:43:14.356252 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.356265 kubelet[2584]: E0306 01:43:14.356278 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.357343 kubelet[2584]: E0306 01:43:14.356902 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.357343 kubelet[2584]: W0306 01:43:14.356935 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.357343 kubelet[2584]: E0306 01:43:14.356980 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.357597 kubelet[2584]: E0306 01:43:14.357545 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.357597 kubelet[2584]: W0306 01:43:14.357583 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.357668 kubelet[2584]: E0306 01:43:14.357601 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.358154 kubelet[2584]: E0306 01:43:14.358073 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.358154 kubelet[2584]: W0306 01:43:14.358102 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.358154 kubelet[2584]: E0306 01:43:14.358113 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.358401 kubelet[2584]: E0306 01:43:14.358387 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.358401 kubelet[2584]: W0306 01:43:14.358396 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.358451 kubelet[2584]: E0306 01:43:14.358405 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.358895 kubelet[2584]: E0306 01:43:14.358853 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.358895 kubelet[2584]: W0306 01:43:14.358885 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.358895 kubelet[2584]: E0306 01:43:14.358898 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.361983 kubelet[2584]: E0306 01:43:14.361058 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.361983 kubelet[2584]: W0306 01:43:14.361071 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.361983 kubelet[2584]: E0306 01:43:14.361082 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.362161 kubelet[2584]: E0306 01:43:14.362118 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.362161 kubelet[2584]: W0306 01:43:14.362127 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.362161 kubelet[2584]: E0306 01:43:14.362136 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.362472 kubelet[2584]: E0306 01:43:14.362456 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.362552 kubelet[2584]: W0306 01:43:14.362539 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.362677 kubelet[2584]: E0306 01:43:14.362664 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.363682 kubelet[2584]: E0306 01:43:14.363623 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.363682 kubelet[2584]: W0306 01:43:14.363658 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.364323 kubelet[2584]: E0306 01:43:14.363895 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.364553 kubelet[2584]: E0306 01:43:14.364417 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.364553 kubelet[2584]: W0306 01:43:14.364484 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.364553 kubelet[2584]: E0306 01:43:14.364494 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.366092 kubelet[2584]: E0306 01:43:14.365873 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.366092 kubelet[2584]: W0306 01:43:14.365910 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.366092 kubelet[2584]: E0306 01:43:14.365920 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.366270 kubelet[2584]: E0306 01:43:14.366221 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.366270 kubelet[2584]: W0306 01:43:14.366233 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.366270 kubelet[2584]: E0306 01:43:14.366249 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.366729 kubelet[2584]: E0306 01:43:14.366644 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.366729 kubelet[2584]: W0306 01:43:14.366680 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.366729 kubelet[2584]: E0306 01:43:14.366691 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.367215 kubelet[2584]: E0306 01:43:14.367201 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.367215 kubelet[2584]: W0306 01:43:14.367213 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.367215 kubelet[2584]: E0306 01:43:14.367223 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.367639 kubelet[2584]: E0306 01:43:14.367627 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.367639 kubelet[2584]: W0306 01:43:14.367639 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.367699 kubelet[2584]: E0306 01:43:14.367647 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.368251 kubelet[2584]: E0306 01:43:14.368125 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.368251 kubelet[2584]: W0306 01:43:14.368155 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.368251 kubelet[2584]: E0306 01:43:14.368165 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.368557 kubelet[2584]: E0306 01:43:14.368527 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.368623 kubelet[2584]: W0306 01:43:14.368559 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.368623 kubelet[2584]: E0306 01:43:14.368569 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.371234 kubelet[2584]: E0306 01:43:14.371210 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.371234 kubelet[2584]: W0306 01:43:14.371223 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.371234 kubelet[2584]: E0306 01:43:14.371233 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.372009 kubelet[2584]: E0306 01:43:14.371934 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.372009 kubelet[2584]: W0306 01:43:14.371967 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.372009 kubelet[2584]: E0306 01:43:14.371982 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.372408 kubelet[2584]: E0306 01:43:14.372379 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.372408 kubelet[2584]: W0306 01:43:14.372410 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.372408 kubelet[2584]: E0306 01:43:14.372422 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.373554 kubelet[2584]: E0306 01:43:14.373231 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.373554 kubelet[2584]: W0306 01:43:14.373244 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.373554 kubelet[2584]: E0306 01:43:14.373254 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.373689 kubelet[2584]: E0306 01:43:14.373593 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.373689 kubelet[2584]: W0306 01:43:14.373602 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.373689 kubelet[2584]: E0306 01:43:14.373613 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.374528 kubelet[2584]: E0306 01:43:14.374014 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.374528 kubelet[2584]: W0306 01:43:14.374026 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.374528 kubelet[2584]: E0306 01:43:14.374036 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.374952 kubelet[2584]: E0306 01:43:14.374924 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.374992 kubelet[2584]: W0306 01:43:14.374952 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.374992 kubelet[2584]: E0306 01:43:14.374963 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:14.393208 kubelet[2584]: E0306 01:43:14.392437 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:14.393208 kubelet[2584]: W0306 01:43:14.392467 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:14.393208 kubelet[2584]: E0306 01:43:14.392498 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:14.400468 containerd[1451]: time="2026-03-06T01:43:14.399865363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:14.400468 containerd[1451]: time="2026-03-06T01:43:14.399969136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:14.400468 containerd[1451]: time="2026-03-06T01:43:14.399995435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:14.400998 containerd[1451]: time="2026-03-06T01:43:14.400609542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:14.427331 containerd[1451]: time="2026-03-06T01:43:14.426721002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7bl6,Uid:2856abd4-6b35-4d2e-a7e7-ba7c43f7199d,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:14.463563 systemd[1]: Started cri-containerd-c337eb5a4dc4c1f57c0f6ba7aa5c7b1de22b5e6c78b6aae1d73bb905984a59a0.scope - libcontainer container c337eb5a4dc4c1f57c0f6ba7aa5c7b1de22b5e6c78b6aae1d73bb905984a59a0. Mar 6 01:43:14.542003 containerd[1451]: time="2026-03-06T01:43:14.539996238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:14.542003 containerd[1451]: time="2026-03-06T01:43:14.540070757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:14.542003 containerd[1451]: time="2026-03-06T01:43:14.540088590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:14.542003 containerd[1451]: time="2026-03-06T01:43:14.540213282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:14.579351 systemd[1]: Started cri-containerd-713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a.scope - libcontainer container 713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a. Mar 6 01:43:14.608305 containerd[1451]: time="2026-03-06T01:43:14.608081390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789465fdc5-wqqrp,Uid:685b682f-7349-42dd-9e38-fb530efcf4fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"c337eb5a4dc4c1f57c0f6ba7aa5c7b1de22b5e6c78b6aae1d73bb905984a59a0\"" Mar 6 01:43:14.614691 kubelet[2584]: E0306 01:43:14.614580 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:14.624970 containerd[1451]: time="2026-03-06T01:43:14.624356617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 01:43:14.633702 containerd[1451]: time="2026-03-06T01:43:14.633537398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7bl6,Uid:2856abd4-6b35-4d2e-a7e7-ba7c43f7199d,Namespace:calico-system,Attempt:0,} returns sandbox id \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\"" Mar 6 01:43:15.557297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904362014.mount: Deactivated successfully. 
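The long run of driver-call.go failures above comes from kubelet probing the nodeagent~uds FlexVolume directory (exposed to calico-node through the flexvol-driver-host host path in its volume list): the uds executable is not present yet, the init call produces no output, and unmarshalling the empty string fails with "unexpected end of JSON input". A FlexVolume driver's init call is expected to print a single JSON status object; the stub below is an illustrative stand-in showing that expected shape (it is not Calico's driver, which is normally installed into that directory by calico-node's flexvol init container once it starts):

# Illustrative stand-in for a FlexVolume driver's "init" call. The empty
# output seen in the log is what makes kubelet's JSON unmarshal fail.
import json, sys

if len(sys.argv) > 1 and sys.argv[1] == "init":
    print(json.dumps({
        "status": "Success",
        "capabilities": {"attach": False},   # node-local driver, no attach/detach
    }))
    sys.exit(0)

print(json.dumps({"status": "Not supported"}))

Once a driver that answers init like this exists at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the periodic "Error dynamically probing plugins" entries stop.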
Mar 6 01:43:15.647883 kubelet[2584]: E0306 01:43:15.647659 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:16.245862 containerd[1451]: time="2026-03-06T01:43:16.245593382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:16.246960 containerd[1451]: time="2026-03-06T01:43:16.246831128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 01:43:16.248232 containerd[1451]: time="2026-03-06T01:43:16.248104134Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:16.252895 containerd[1451]: time="2026-03-06T01:43:16.252700443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:16.253411 containerd[1451]: time="2026-03-06T01:43:16.253263011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.628784257s" Mar 6 01:43:16.253411 containerd[1451]: time="2026-03-06T01:43:16.253335154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 01:43:16.254710 containerd[1451]: time="2026-03-06T01:43:16.254647329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 01:43:16.296203 containerd[1451]: time="2026-03-06T01:43:16.296128256Z" level=info msg="CreateContainer within sandbox \"c337eb5a4dc4c1f57c0f6ba7aa5c7b1de22b5e6c78b6aae1d73bb905984a59a0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 01:43:16.317100 containerd[1451]: time="2026-03-06T01:43:16.316983228Z" level=info msg="CreateContainer within sandbox \"c337eb5a4dc4c1f57c0f6ba7aa5c7b1de22b5e6c78b6aae1d73bb905984a59a0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"109a1344aa52bfd0c3feab9d22126e6ea257b203bb32f32c5a84d039d35e8616\"" Mar 6 01:43:16.318050 containerd[1451]: time="2026-03-06T01:43:16.317931329Z" level=info msg="StartContainer for \"109a1344aa52bfd0c3feab9d22126e6ea257b203bb32f32c5a84d039d35e8616\"" Mar 6 01:43:16.371035 systemd[1]: Started cri-containerd-109a1344aa52bfd0c3feab9d22126e6ea257b203bb32f32c5a84d039d35e8616.scope - libcontainer container 109a1344aa52bfd0c3feab9d22126e6ea257b203bb32f32c5a84d039d35e8616. 
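The csi-node-driver-df657 pod above keeps being skipped with "cni plugin not initialized" because the container runtime reports NetworkReady=false until a CNI network configuration file exists in its conf directory; calico-node, once running, writes that config into the cni-net-dir host path listed in its volumes, after which the condition clears. A rough illustration of the readiness test, assuming the conventional /etc/cni/net.d location (the directory is not named in this log):

# Rough illustration, assuming the conventional CNI conf dir /etc/cni/net.d:
# the runtime stays NetworkReady=false while no *.conf/*.conflist file exists,
# which is what gates pods such as csi-node-driver-df657 above.
import glob

def cni_network_ready(conf_dir="/etc/cni/net.d"):
    confs = glob.glob(f"{conf_dir}/*.conf") + glob.glob(f"{conf_dir}/*.conflist")
    return len(confs) > 0

print(cni_network_ready())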
Mar 6 01:43:16.439600 containerd[1451]: time="2026-03-06T01:43:16.439551824Z" level=info msg="StartContainer for \"109a1344aa52bfd0c3feab9d22126e6ea257b203bb32f32c5a84d039d35e8616\" returns successfully" Mar 6 01:43:17.206058 kubelet[2584]: E0306 01:43:17.205859 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:17.231187 kubelet[2584]: I0306 01:43:17.231044 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-789465fdc5-wqqrp" podStartSLOduration=2.59547778 podStartE2EDuration="4.231021323s" podCreationTimestamp="2026-03-06 01:43:13 +0000 UTC" firstStartedPulling="2026-03-06 01:43:14.618882483 +0000 UTC m=+23.096361676" lastFinishedPulling="2026-03-06 01:43:16.254426025 +0000 UTC m=+24.731905219" observedRunningTime="2026-03-06 01:43:17.225167858 +0000 UTC m=+25.702647101" watchObservedRunningTime="2026-03-06 01:43:17.231021323 +0000 UTC m=+25.708500517" Mar 6 01:43:17.279871 kubelet[2584]: E0306 01:43:17.279609 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.279871 kubelet[2584]: W0306 01:43:17.279661 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.279871 kubelet[2584]: E0306 01:43:17.279693 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.280369 kubelet[2584]: E0306 01:43:17.280311 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.280369 kubelet[2584]: W0306 01:43:17.280348 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.280369 kubelet[2584]: E0306 01:43:17.280366 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.280979 kubelet[2584]: E0306 01:43:17.280935 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.280979 kubelet[2584]: W0306 01:43:17.280955 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.281060 kubelet[2584]: E0306 01:43:17.280974 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.281560 kubelet[2584]: E0306 01:43:17.281493 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.281560 kubelet[2584]: W0306 01:43:17.281537 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.281560 kubelet[2584]: E0306 01:43:17.281552 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.282081 kubelet[2584]: E0306 01:43:17.282048 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.282142 kubelet[2584]: W0306 01:43:17.282085 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.282142 kubelet[2584]: E0306 01:43:17.282100 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.282498 kubelet[2584]: E0306 01:43:17.282467 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.282553 kubelet[2584]: W0306 01:43:17.282498 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.282553 kubelet[2584]: E0306 01:43:17.282514 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.283057 kubelet[2584]: E0306 01:43:17.282990 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.283057 kubelet[2584]: W0306 01:43:17.283025 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.283057 kubelet[2584]: E0306 01:43:17.283036 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.283469 kubelet[2584]: E0306 01:43:17.283402 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.283469 kubelet[2584]: W0306 01:43:17.283431 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.283469 kubelet[2584]: E0306 01:43:17.283442 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.284105 kubelet[2584]: E0306 01:43:17.283971 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.284105 kubelet[2584]: W0306 01:43:17.284002 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.284105 kubelet[2584]: E0306 01:43:17.284013 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.284515 kubelet[2584]: E0306 01:43:17.284460 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.284515 kubelet[2584]: W0306 01:43:17.284475 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.284515 kubelet[2584]: E0306 01:43:17.284489 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.284961 kubelet[2584]: E0306 01:43:17.284921 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.285015 kubelet[2584]: W0306 01:43:17.284960 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.285015 kubelet[2584]: E0306 01:43:17.284975 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.285483 kubelet[2584]: E0306 01:43:17.285438 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.285554 kubelet[2584]: W0306 01:43:17.285482 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.285554 kubelet[2584]: E0306 01:43:17.285497 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.286028 kubelet[2584]: E0306 01:43:17.285999 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.286077 kubelet[2584]: W0306 01:43:17.286029 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.286077 kubelet[2584]: E0306 01:43:17.286045 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.286429 kubelet[2584]: E0306 01:43:17.286401 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.286472 kubelet[2584]: W0306 01:43:17.286431 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.286472 kubelet[2584]: E0306 01:43:17.286445 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.286989 kubelet[2584]: E0306 01:43:17.286930 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.286989 kubelet[2584]: W0306 01:43:17.286962 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.286989 kubelet[2584]: E0306 01:43:17.286972 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.287483 kubelet[2584]: E0306 01:43:17.287456 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.287528 kubelet[2584]: W0306 01:43:17.287484 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.287528 kubelet[2584]: E0306 01:43:17.287494 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.287982 kubelet[2584]: E0306 01:43:17.287970 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.287982 kubelet[2584]: W0306 01:43:17.287981 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.288047 kubelet[2584]: E0306 01:43:17.287990 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.288423 kubelet[2584]: E0306 01:43:17.288378 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.288423 kubelet[2584]: W0306 01:43:17.288404 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.288423 kubelet[2584]: E0306 01:43:17.288413 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.289071 kubelet[2584]: E0306 01:43:17.288884 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.289071 kubelet[2584]: W0306 01:43:17.288911 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.289071 kubelet[2584]: E0306 01:43:17.288921 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.289366 kubelet[2584]: E0306 01:43:17.289324 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.289366 kubelet[2584]: W0306 01:43:17.289360 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.289451 kubelet[2584]: E0306 01:43:17.289385 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.289849 kubelet[2584]: E0306 01:43:17.289810 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.289849 kubelet[2584]: W0306 01:43:17.289837 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.289849 kubelet[2584]: E0306 01:43:17.289847 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.290332 kubelet[2584]: E0306 01:43:17.290263 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.290332 kubelet[2584]: W0306 01:43:17.290308 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.290332 kubelet[2584]: E0306 01:43:17.290324 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.290911 kubelet[2584]: E0306 01:43:17.290848 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.290911 kubelet[2584]: W0306 01:43:17.290892 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.290911 kubelet[2584]: E0306 01:43:17.290905 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.291424 kubelet[2584]: E0306 01:43:17.291349 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.291424 kubelet[2584]: W0306 01:43:17.291393 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.291424 kubelet[2584]: E0306 01:43:17.291406 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.291982 kubelet[2584]: E0306 01:43:17.291937 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.291982 kubelet[2584]: W0306 01:43:17.291982 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.292099 kubelet[2584]: E0306 01:43:17.291998 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.292505 kubelet[2584]: E0306 01:43:17.292429 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.292505 kubelet[2584]: W0306 01:43:17.292481 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.292505 kubelet[2584]: E0306 01:43:17.292496 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.293017 kubelet[2584]: E0306 01:43:17.292949 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.293017 kubelet[2584]: W0306 01:43:17.292995 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.293017 kubelet[2584]: E0306 01:43:17.293008 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.293550 kubelet[2584]: E0306 01:43:17.293478 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.293550 kubelet[2584]: W0306 01:43:17.293529 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.293550 kubelet[2584]: E0306 01:43:17.293543 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.294121 kubelet[2584]: E0306 01:43:17.294056 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.294121 kubelet[2584]: W0306 01:43:17.294101 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.294121 kubelet[2584]: E0306 01:43:17.294115 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.294901 kubelet[2584]: E0306 01:43:17.294722 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.294901 kubelet[2584]: W0306 01:43:17.294859 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.294901 kubelet[2584]: E0306 01:43:17.294875 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.295341 kubelet[2584]: E0306 01:43:17.295296 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.295341 kubelet[2584]: W0306 01:43:17.295337 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.295423 kubelet[2584]: E0306 01:43:17.295351 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.296136 kubelet[2584]: E0306 01:43:17.296094 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.296136 kubelet[2584]: W0306 01:43:17.296132 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.296241 kubelet[2584]: E0306 01:43:17.296149 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:17.296638 kubelet[2584]: E0306 01:43:17.296597 2584 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:17.296685 kubelet[2584]: W0306 01:43:17.296639 2584 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:17.296685 kubelet[2584]: E0306 01:43:17.296656 2584 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:17.595118 containerd[1451]: time="2026-03-06T01:43:17.594938651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:17.597069 containerd[1451]: time="2026-03-06T01:43:17.596934650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 01:43:17.599127 containerd[1451]: time="2026-03-06T01:43:17.599080979Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:17.603909 containerd[1451]: time="2026-03-06T01:43:17.603835857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:17.604627 containerd[1451]: time="2026-03-06T01:43:17.604578505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.349857739s" Mar 6 01:43:17.607153 containerd[1451]: time="2026-03-06T01:43:17.604911541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 01:43:17.614508 containerd[1451]: time="2026-03-06T01:43:17.614380412Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 01:43:17.649088 kubelet[2584]: E0306 01:43:17.648991 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:17.682147 containerd[1451]: time="2026-03-06T01:43:17.682007880Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b\"" Mar 6 01:43:17.683232 containerd[1451]: time="2026-03-06T01:43:17.683157473Z" level=info msg="StartContainer for \"73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b\"" Mar 6 01:43:17.727717 systemd[1]: run-containerd-runc-k8s.io-73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b-runc.XYvrWP.mount: Deactivated successfully. Mar 6 01:43:17.742212 systemd[1]: Started cri-containerd-73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b.scope - libcontainer container 73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b. 
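The repeated kubelet driver-call errors above come from FlexVolume probing: the kubelet executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to decode its stdout as JSON. Because that binary is not on the host yet (the pod2daemon-flexvol image that presumably installs it has only just been pulled above), the call produces no output at all, and unmarshalling an empty byte slice in Go yields exactly the error string in the log. A minimal sketch reproducing the message, for illustration only (the struct field names are assumptions, not the kubelet's types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus stands in for the JSON a FlexVolume driver is expected to
    // print in response to "init"; the field names here are illustrative.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        var st DriverStatus
        // A missing or failing driver produces empty output, so the caller
        // ends up unmarshalling an empty byte slice.
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // prints: unexpected end of JSON input
    }

Once the flexvol-driver container started above has copied the binary into place, these probe errors stop appearing later in the log.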
Mar 6 01:43:17.794851 containerd[1451]: time="2026-03-06T01:43:17.794700975Z" level=info msg="StartContainer for \"73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b\" returns successfully" Mar 6 01:43:17.812308 systemd[1]: cri-containerd-73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b.scope: Deactivated successfully. Mar 6 01:43:17.869149 containerd[1451]: time="2026-03-06T01:43:17.869069890Z" level=info msg="shim disconnected" id=73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b namespace=k8s.io Mar 6 01:43:17.869149 containerd[1451]: time="2026-03-06T01:43:17.869142385Z" level=warning msg="cleaning up after shim disconnected" id=73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b namespace=k8s.io Mar 6 01:43:17.869149 containerd[1451]: time="2026-03-06T01:43:17.869152143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:43:18.211209 kubelet[2584]: I0306 01:43:18.210869 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:18.211824 kubelet[2584]: E0306 01:43:18.211308 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:18.219857 containerd[1451]: time="2026-03-06T01:43:18.219823799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 01:43:18.278882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73dae8c2c1babfbfc47273b597fdfc979a56a205d9c2d56afe5b20f58417d97b-rootfs.mount: Deactivated successfully. Mar 6 01:43:19.682217 kubelet[2584]: E0306 01:43:19.682029 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:21.650677 kubelet[2584]: E0306 01:43:21.649499 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:22.795382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219058120.mount: Deactivated successfully. 
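The "Nameserver limits exceeded" warning from the kubelet's dns.go reflects the classic resolver limit of three nameservers per resolv.conf: the applied line in the log keeps the first three entries (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A small sketch of that truncation, assuming the limit of three; this mimics the observable behavior, not the kubelet's actual implementation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers is the three-entry resolv.conf limit the warning refers to.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // Matches the shape of the kubelet warning: extra entries are omitted.
            fmt.Printf("nameserver limits exceeded, applying only: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }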
Mar 6 01:43:23.013289 containerd[1451]: time="2026-03-06T01:43:23.013109032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:23.014446 containerd[1451]: time="2026-03-06T01:43:23.014322959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 01:43:23.015574 containerd[1451]: time="2026-03-06T01:43:23.015533481Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:23.018436 containerd[1451]: time="2026-03-06T01:43:23.018381603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:23.055657 containerd[1451]: time="2026-03-06T01:43:23.055448721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.835410342s" Mar 6 01:43:23.055657 containerd[1451]: time="2026-03-06T01:43:23.055533139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 01:43:23.062890 containerd[1451]: time="2026-03-06T01:43:23.062825171Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 01:43:23.142236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461319122.mount: Deactivated successfully. Mar 6 01:43:23.163827 containerd[1451]: time="2026-03-06T01:43:23.163618387Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50\"" Mar 6 01:43:23.166408 containerd[1451]: time="2026-03-06T01:43:23.164896662Z" level=info msg="StartContainer for \"a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50\"" Mar 6 01:43:23.276966 systemd[1]: Started cri-containerd-a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50.scope - libcontainer container a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50. Mar 6 01:43:23.367388 containerd[1451]: time="2026-03-06T01:43:23.367334819Z" level=info msg="StartContainer for \"a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50\" returns successfully" Mar 6 01:43:23.404605 systemd[1]: cri-containerd-a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50.scope: Deactivated successfully. 
Mar 6 01:43:23.455608 containerd[1451]: time="2026-03-06T01:43:23.455459950Z" level=info msg="shim disconnected" id=a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50 namespace=k8s.io Mar 6 01:43:23.455608 containerd[1451]: time="2026-03-06T01:43:23.455602706Z" level=warning msg="cleaning up after shim disconnected" id=a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50 namespace=k8s.io Mar 6 01:43:23.456009 containerd[1451]: time="2026-03-06T01:43:23.455615670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:43:23.648384 kubelet[2584]: E0306 01:43:23.647991 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:23.795332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4132969e0ce19cc17202930e0e60d28d72fc31e34009b44c69d7430e4e64e50-rootfs.mount: Deactivated successfully. Mar 6 01:43:24.256712 containerd[1451]: time="2026-03-06T01:43:24.256637738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 01:43:25.648203 kubelet[2584]: E0306 01:43:25.647994 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:27.338359 containerd[1451]: time="2026-03-06T01:43:27.338247142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:27.349085 containerd[1451]: time="2026-03-06T01:43:27.348920068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 01:43:27.428839 containerd[1451]: time="2026-03-06T01:43:27.428626343Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:27.442570 containerd[1451]: time="2026-03-06T01:43:27.442468390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:27.443779 containerd[1451]: time="2026-03-06T01:43:27.443672203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.186990443s" Mar 6 01:43:27.443872 containerd[1451]: time="2026-03-06T01:43:27.443834735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 01:43:27.451103 containerd[1451]: time="2026-03-06T01:43:27.451036885Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:43:27.480097 
containerd[1451]: time="2026-03-06T01:43:27.479520877Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d\"" Mar 6 01:43:27.489146 containerd[1451]: time="2026-03-06T01:43:27.489033677Z" level=info msg="StartContainer for \"9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d\"" Mar 6 01:43:27.578067 systemd[1]: Started cri-containerd-9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d.scope - libcontainer container 9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d. Mar 6 01:43:27.637299 containerd[1451]: time="2026-03-06T01:43:27.637207532Z" level=info msg="StartContainer for \"9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d\" returns successfully" Mar 6 01:43:27.653827 kubelet[2584]: E0306 01:43:27.652859 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:28.402250 systemd[1]: cri-containerd-9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d.scope: Deactivated successfully. Mar 6 01:43:28.431257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d-rootfs.mount: Deactivated successfully. Mar 6 01:43:28.436515 containerd[1451]: time="2026-03-06T01:43:28.436457214Z" level=info msg="shim disconnected" id=9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d namespace=k8s.io Mar 6 01:43:28.436515 containerd[1451]: time="2026-03-06T01:43:28.436500956Z" level=warning msg="cleaning up after shim disconnected" id=9a7c151731d61b35f959b71589eb699485dde61f147f4a232b3ddfe9c302945d namespace=k8s.io Mar 6 01:43:28.436515 containerd[1451]: time="2026-03-06T01:43:28.436509041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:43:28.480434 kubelet[2584]: I0306 01:43:28.480345 2584 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 01:43:28.556135 systemd[1]: Created slice kubepods-besteffort-poda5517b25_89f2_4a90_a131_f286bdda3fd7.slice - libcontainer container kubepods-besteffort-poda5517b25_89f2_4a90_a131_f286bdda3fd7.slice. Mar 6 01:43:28.571880 systemd[1]: Created slice kubepods-burstable-poddff04dd3_ef84_4619_a71a_c275e3897a95.slice - libcontainer container kubepods-burstable-poddff04dd3_ef84_4619_a71a_c275e3897a95.slice. Mar 6 01:43:28.583642 systemd[1]: Created slice kubepods-besteffort-pod00782c1b_bef0_48dd_8d89_f3e72a842b74.slice - libcontainer container kubepods-besteffort-pod00782c1b_bef0_48dd_8d89_f3e72a842b74.slice. Mar 6 01:43:28.593689 systemd[1]: Created slice kubepods-burstable-pod7c745590_fa59_4b3b_8745_5a7c8ee1d2b2.slice - libcontainer container kubepods-burstable-pod7c745590_fa59_4b3b_8745_5a7c8ee1d2b2.slice. Mar 6 01:43:28.603227 systemd[1]: Created slice kubepods-besteffort-pod9d73777b_f4c0_4c9b_90d2_bd41b4633f25.slice - libcontainer container kubepods-besteffort-pod9d73777b_f4c0_4c9b_90d2_bd41b4633f25.slice. 
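The "Created slice kubepods-..." entries here and below pair up with the pod UIDs that appear in the volume reconciler lines that follow: with the systemd cgroup driver in use on this node, the pod UID's dashes become underscores in the slice name, which is why a5517b25-89f2-4a90-a131-f286bdda3fd7 shows up as kubepods-besteffort-poda5517b25_89f2_4a90_a131_f286bdda3fd7.slice. A short sketch of that naming, derived from the names visible in this journal rather than from the cgroup driver's code:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the pattern visible in the journal above:
    // QoS class prefix plus the pod UID with dashes turned into underscores.
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "a5517b25-89f2-4a90-a131-f286bdda3fd7"))
        // kubepods-besteffort-poda5517b25_89f2_4a90_a131_f286bdda3fd7.slice
        fmt.Println(sliceName("burstable", "dff04dd3-ef84-4619-a71a-c275e3897a95"))
        // kubepods-burstable-poddff04dd3_ef84_4619_a71a_c275e3897a95.slice
    }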
Mar 6 01:43:28.610653 systemd[1]: Created slice kubepods-besteffort-poda165b4e4_ca12_4318_93a2_9f1d976fbb5d.slice - libcontainer container kubepods-besteffort-poda165b4e4_ca12_4318_93a2_9f1d976fbb5d.slice. Mar 6 01:43:28.618593 systemd[1]: Created slice kubepods-besteffort-pod29925f97_ffaa_4463_8aba_6f0558d0f689.slice - libcontainer container kubepods-besteffort-pod29925f97_ffaa_4463_8aba_6f0558d0f689.slice. Mar 6 01:43:28.697226 kubelet[2584]: I0306 01:43:28.696987 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvkc9\" (UniqueName: \"kubernetes.io/projected/29925f97-ffaa-4463-8aba-6f0558d0f689-kube-api-access-bvkc9\") pod \"calico-kube-controllers-6c6dbb68d8-nnps9\" (UID: \"29925f97-ffaa-4463-8aba-6f0558d0f689\") " pod="calico-system/calico-kube-controllers-6c6dbb68d8-nnps9" Mar 6 01:43:28.697226 kubelet[2584]: I0306 01:43:28.697054 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-backend-key-pair\") pod \"whisker-5585599fd-r7kcl\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " pod="calico-system/whisker-5585599fd-r7kcl" Mar 6 01:43:28.697226 kubelet[2584]: I0306 01:43:28.697072 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d73777b-f4c0-4c9b-90d2-bd41b4633f25-goldmane-key-pair\") pod \"goldmane-5b85766d88-w9fg5\" (UID: \"9d73777b-f4c0-4c9b-90d2-bd41b4633f25\") " pod="calico-system/goldmane-5b85766d88-w9fg5" Mar 6 01:43:28.697226 kubelet[2584]: I0306 01:43:28.697087 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg87w\" (UniqueName: \"kubernetes.io/projected/9d73777b-f4c0-4c9b-90d2-bd41b4633f25-kube-api-access-pg87w\") pod \"goldmane-5b85766d88-w9fg5\" (UID: \"9d73777b-f4c0-4c9b-90d2-bd41b4633f25\") " pod="calico-system/goldmane-5b85766d88-w9fg5" Mar 6 01:43:28.697226 kubelet[2584]: I0306 01:43:28.697148 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-ca-bundle\") pod \"whisker-5585599fd-r7kcl\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " pod="calico-system/whisker-5585599fd-r7kcl" Mar 6 01:43:28.698018 kubelet[2584]: I0306 01:43:28.697198 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-nginx-config\") pod \"whisker-5585599fd-r7kcl\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " pod="calico-system/whisker-5585599fd-r7kcl" Mar 6 01:43:28.698018 kubelet[2584]: I0306 01:43:28.697242 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29925f97-ffaa-4463-8aba-6f0558d0f689-tigera-ca-bundle\") pod \"calico-kube-controllers-6c6dbb68d8-nnps9\" (UID: \"29925f97-ffaa-4463-8aba-6f0558d0f689\") " pod="calico-system/calico-kube-controllers-6c6dbb68d8-nnps9" Mar 6 01:43:28.698018 kubelet[2584]: I0306 01:43:28.697281 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gl84\" (UniqueName: 
\"kubernetes.io/projected/a5517b25-89f2-4a90-a131-f286bdda3fd7-kube-api-access-2gl84\") pod \"whisker-5585599fd-r7kcl\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " pod="calico-system/whisker-5585599fd-r7kcl" Mar 6 01:43:28.698018 kubelet[2584]: I0306 01:43:28.697307 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5sqh\" (UniqueName: \"kubernetes.io/projected/00782c1b-bef0-48dd-8d89-f3e72a842b74-kube-api-access-d5sqh\") pod \"calico-apiserver-8687f94789-jnpd7\" (UID: \"00782c1b-bef0-48dd-8d89-f3e72a842b74\") " pod="calico-system/calico-apiserver-8687f94789-jnpd7" Mar 6 01:43:28.698018 kubelet[2584]: I0306 01:43:28.697333 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d73777b-f4c0-4c9b-90d2-bd41b4633f25-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-w9fg5\" (UID: \"9d73777b-f4c0-4c9b-90d2-bd41b4633f25\") " pod="calico-system/goldmane-5b85766d88-w9fg5" Mar 6 01:43:28.698215 kubelet[2584]: I0306 01:43:28.697359 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00782c1b-bef0-48dd-8d89-f3e72a842b74-calico-apiserver-certs\") pod \"calico-apiserver-8687f94789-jnpd7\" (UID: \"00782c1b-bef0-48dd-8d89-f3e72a842b74\") " pod="calico-system/calico-apiserver-8687f94789-jnpd7" Mar 6 01:43:28.698215 kubelet[2584]: I0306 01:43:28.697385 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c745590-fa59-4b3b-8745-5a7c8ee1d2b2-config-volume\") pod \"coredns-674b8bbfcf-b5xzx\" (UID: \"7c745590-fa59-4b3b-8745-5a7c8ee1d2b2\") " pod="kube-system/coredns-674b8bbfcf-b5xzx" Mar 6 01:43:28.698215 kubelet[2584]: I0306 01:43:28.697418 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a165b4e4-ca12-4318-93a2-9f1d976fbb5d-calico-apiserver-certs\") pod \"calico-apiserver-8687f94789-q6vc5\" (UID: \"a165b4e4-ca12-4318-93a2-9f1d976fbb5d\") " pod="calico-system/calico-apiserver-8687f94789-q6vc5" Mar 6 01:43:28.698215 kubelet[2584]: I0306 01:43:28.697445 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5q9\" (UniqueName: \"kubernetes.io/projected/a165b4e4-ca12-4318-93a2-9f1d976fbb5d-kube-api-access-8k5q9\") pod \"calico-apiserver-8687f94789-q6vc5\" (UID: \"a165b4e4-ca12-4318-93a2-9f1d976fbb5d\") " pod="calico-system/calico-apiserver-8687f94789-q6vc5" Mar 6 01:43:28.698215 kubelet[2584]: I0306 01:43:28.697466 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6ldr\" (UniqueName: \"kubernetes.io/projected/dff04dd3-ef84-4619-a71a-c275e3897a95-kube-api-access-h6ldr\") pod \"coredns-674b8bbfcf-nh5sh\" (UID: \"dff04dd3-ef84-4619-a71a-c275e3897a95\") " pod="kube-system/coredns-674b8bbfcf-nh5sh" Mar 6 01:43:28.698328 kubelet[2584]: I0306 01:43:28.697481 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff04dd3-ef84-4619-a71a-c275e3897a95-config-volume\") pod \"coredns-674b8bbfcf-nh5sh\" (UID: \"dff04dd3-ef84-4619-a71a-c275e3897a95\") " pod="kube-system/coredns-674b8bbfcf-nh5sh" 
Mar 6 01:43:28.698328 kubelet[2584]: I0306 01:43:28.697508 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkjz\" (UniqueName: \"kubernetes.io/projected/7c745590-fa59-4b3b-8745-5a7c8ee1d2b2-kube-api-access-8jkjz\") pod \"coredns-674b8bbfcf-b5xzx\" (UID: \"7c745590-fa59-4b3b-8745-5a7c8ee1d2b2\") " pod="kube-system/coredns-674b8bbfcf-b5xzx" Mar 6 01:43:28.698328 kubelet[2584]: I0306 01:43:28.697548 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d73777b-f4c0-4c9b-90d2-bd41b4633f25-config\") pod \"goldmane-5b85766d88-w9fg5\" (UID: \"9d73777b-f4c0-4c9b-90d2-bd41b4633f25\") " pod="calico-system/goldmane-5b85766d88-w9fg5" Mar 6 01:43:28.868503 containerd[1451]: time="2026-03-06T01:43:28.868224171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5585599fd-r7kcl,Uid:a5517b25-89f2-4a90-a131-f286bdda3fd7,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:28.878913 kubelet[2584]: E0306 01:43:28.878672 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:28.880061 containerd[1451]: time="2026-03-06T01:43:28.879990084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nh5sh,Uid:dff04dd3-ef84-4619-a71a-c275e3897a95,Namespace:kube-system,Attempt:0,}" Mar 6 01:43:28.889557 containerd[1451]: time="2026-03-06T01:43:28.889497702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-jnpd7,Uid:00782c1b-bef0-48dd-8d89-f3e72a842b74,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:28.903366 kubelet[2584]: E0306 01:43:28.903248 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:28.905281 containerd[1451]: time="2026-03-06T01:43:28.904073277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b5xzx,Uid:7c745590-fa59-4b3b-8745-5a7c8ee1d2b2,Namespace:kube-system,Attempt:0,}" Mar 6 01:43:28.908192 containerd[1451]: time="2026-03-06T01:43:28.908132216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-w9fg5,Uid:9d73777b-f4c0-4c9b-90d2-bd41b4633f25,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:28.917554 containerd[1451]: time="2026-03-06T01:43:28.917311235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-q6vc5,Uid:a165b4e4-ca12-4318-93a2-9f1d976fbb5d,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:28.936255 containerd[1451]: time="2026-03-06T01:43:28.936051362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6dbb68d8-nnps9,Uid:29925f97-ffaa-4463-8aba-6f0558d0f689,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:29.356826 containerd[1451]: time="2026-03-06T01:43:29.355890549Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 01:43:29.409612 containerd[1451]: time="2026-03-06T01:43:29.409554478Z" level=info msg="CreateContainer within sandbox \"713d2885fa8d7d20d6b3d662be9bb81f30905bc71c853dcd9cef0b32edea906a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"fe8dbc9108d7d640b9f83f66cbe60f112eab6c947e99222d7b2b3ff45769bb2f\"" Mar 6 01:43:29.412523 containerd[1451]: time="2026-03-06T01:43:29.411150212Z" level=info msg="StartContainer for \"fe8dbc9108d7d640b9f83f66cbe60f112eab6c947e99222d7b2b3ff45769bb2f\"" Mar 6 01:43:29.433800 containerd[1451]: time="2026-03-06T01:43:29.433585901Z" level=error msg="Failed to destroy network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.439530 containerd[1451]: time="2026-03-06T01:43:29.439435127Z" level=error msg="encountered an error cleaning up failed sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.440007 containerd[1451]: time="2026-03-06T01:43:29.439562574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5585599fd-r7kcl,Uid:a5517b25-89f2-4a90-a131-f286bdda3fd7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.440632 containerd[1451]: time="2026-03-06T01:43:29.440604524Z" level=error msg="Failed to destroy network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.441301 containerd[1451]: time="2026-03-06T01:43:29.441263889Z" level=error msg="encountered an error cleaning up failed sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.441443 containerd[1451]: time="2026-03-06T01:43:29.441413997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b5xzx,Uid:7c745590-fa59-4b3b-8745-5a7c8ee1d2b2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.448108 kubelet[2584]: E0306 01:43:29.447993 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.448246 kubelet[2584]: E0306 01:43:29.448106 2584 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.448246 kubelet[2584]: E0306 01:43:29.448162 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5585599fd-r7kcl" Mar 6 01:43:29.448246 kubelet[2584]: E0306 01:43:29.448190 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5585599fd-r7kcl" Mar 6 01:43:29.448352 kubelet[2584]: E0306 01:43:29.448247 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5585599fd-r7kcl_calico-system(a5517b25-89f2-4a90-a131-f286bdda3fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5585599fd-r7kcl_calico-system(a5517b25-89f2-4a90-a131-f286bdda3fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5585599fd-r7kcl" podUID="a5517b25-89f2-4a90-a131-f286bdda3fd7" Mar 6 01:43:29.448352 kubelet[2584]: E0306 01:43:29.448126 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b5xzx" Mar 6 01:43:29.453031 kubelet[2584]: E0306 01:43:29.448328 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b5xzx" Mar 6 01:43:29.453031 kubelet[2584]: E0306 01:43:29.448674 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b5xzx_kube-system(7c745590-fa59-4b3b-8745-5a7c8ee1d2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b5xzx_kube-system(7c745590-fa59-4b3b-8745-5a7c8ee1d2b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b5xzx" podUID="7c745590-fa59-4b3b-8745-5a7c8ee1d2b2" Mar 6 01:43:29.494098 containerd[1451]: time="2026-03-06T01:43:29.493938706Z" level=error msg="Failed to destroy network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.495218 containerd[1451]: time="2026-03-06T01:43:29.494014105Z" level=error msg="Failed to destroy network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.498606 containerd[1451]: time="2026-03-06T01:43:29.495176325Z" level=error msg="encountered an error cleaning up failed sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.500011 containerd[1451]: time="2026-03-06T01:43:29.499884722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nh5sh,Uid:dff04dd3-ef84-4619-a71a-c275e3897a95,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.502140 containerd[1451]: time="2026-03-06T01:43:29.500504453Z" level=error msg="encountered an error cleaning up failed sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.502140 containerd[1451]: time="2026-03-06T01:43:29.500610439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-q6vc5,Uid:a165b4e4-ca12-4318-93a2-9f1d976fbb5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.504278 kubelet[2584]: E0306 01:43:29.502307 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 
01:43:29.504278 kubelet[2584]: E0306 01:43:29.502677 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.504278 kubelet[2584]: E0306 01:43:29.502892 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nh5sh" Mar 6 01:43:29.504278 kubelet[2584]: E0306 01:43:29.502937 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nh5sh" Mar 6 01:43:29.505081 kubelet[2584]: E0306 01:43:29.503010 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8687f94789-q6vc5" Mar 6 01:43:29.505081 kubelet[2584]: E0306 01:43:29.503156 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8687f94789-q6vc5" Mar 6 01:43:29.505081 kubelet[2584]: E0306 01:43:29.504167 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8687f94789-q6vc5_calico-system(a165b4e4-ca12-4318-93a2-9f1d976fbb5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8687f94789-q6vc5_calico-system(a165b4e4-ca12-4318-93a2-9f1d976fbb5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8687f94789-q6vc5" podUID="a165b4e4-ca12-4318-93a2-9f1d976fbb5d" Mar 6 01:43:29.505282 kubelet[2584]: E0306 01:43:29.504389 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nh5sh_kube-system(dff04dd3-ef84-4619-a71a-c275e3897a95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-nh5sh_kube-system(dff04dd3-ef84-4619-a71a-c275e3897a95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nh5sh" podUID="dff04dd3-ef84-4619-a71a-c275e3897a95" Mar 6 01:43:29.529676 containerd[1451]: time="2026-03-06T01:43:29.529372300Z" level=error msg="Failed to destroy network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.532231 containerd[1451]: time="2026-03-06T01:43:29.532117780Z" level=error msg="encountered an error cleaning up failed sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.532231 containerd[1451]: time="2026-03-06T01:43:29.532181819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-w9fg5,Uid:9d73777b-f4c0-4c9b-90d2-bd41b4633f25,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.533866 kubelet[2584]: E0306 01:43:29.533024 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.533866 kubelet[2584]: E0306 01:43:29.533098 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-w9fg5" Mar 6 01:43:29.533866 kubelet[2584]: E0306 01:43:29.533188 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-w9fg5" Mar 6 01:43:29.534145 kubelet[2584]: E0306 01:43:29.533379 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-w9fg5_calico-system(9d73777b-f4c0-4c9b-90d2-bd41b4633f25)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-w9fg5_calico-system(9d73777b-f4c0-4c9b-90d2-bd41b4633f25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-w9fg5" podUID="9d73777b-f4c0-4c9b-90d2-bd41b4633f25" Mar 6 01:43:29.535000 containerd[1451]: time="2026-03-06T01:43:29.534959358Z" level=error msg="Failed to destroy network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.535661 containerd[1451]: time="2026-03-06T01:43:29.535535860Z" level=error msg="encountered an error cleaning up failed sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.535661 containerd[1451]: time="2026-03-06T01:43:29.535608665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6dbb68d8-nnps9,Uid:29925f97-ffaa-4463-8aba-6f0558d0f689,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.536865 kubelet[2584]: E0306 01:43:29.536394 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.536865 kubelet[2584]: E0306 01:43:29.536466 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c6dbb68d8-nnps9" Mar 6 01:43:29.536865 kubelet[2584]: E0306 01:43:29.536492 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c6dbb68d8-nnps9" Mar 6 01:43:29.537005 kubelet[2584]: E0306 01:43:29.536595 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-6c6dbb68d8-nnps9_calico-system(29925f97-ffaa-4463-8aba-6f0558d0f689)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c6dbb68d8-nnps9_calico-system(29925f97-ffaa-4463-8aba-6f0558d0f689)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c6dbb68d8-nnps9" podUID="29925f97-ffaa-4463-8aba-6f0558d0f689" Mar 6 01:43:29.540539 containerd[1451]: time="2026-03-06T01:43:29.540329845Z" level=error msg="Failed to destroy network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.541108 containerd[1451]: time="2026-03-06T01:43:29.541021330Z" level=error msg="encountered an error cleaning up failed sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.541172 containerd[1451]: time="2026-03-06T01:43:29.541103412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-jnpd7,Uid:00782c1b-bef0-48dd-8d89-f3e72a842b74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.541491 kubelet[2584]: E0306 01:43:29.541417 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.541529 kubelet[2584]: E0306 01:43:29.541494 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8687f94789-jnpd7" Mar 6 01:43:29.541529 kubelet[2584]: E0306 01:43:29.541519 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8687f94789-jnpd7" Mar 6 
01:43:29.541585 kubelet[2584]: E0306 01:43:29.541565 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8687f94789-jnpd7_calico-system(00782c1b-bef0-48dd-8d89-f3e72a842b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8687f94789-jnpd7_calico-system(00782c1b-bef0-48dd-8d89-f3e72a842b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8687f94789-jnpd7" podUID="00782c1b-bef0-48dd-8d89-f3e72a842b74" Mar 6 01:43:29.546174 systemd[1]: Started cri-containerd-fe8dbc9108d7d640b9f83f66cbe60f112eab6c947e99222d7b2b3ff45769bb2f.scope - libcontainer container fe8dbc9108d7d640b9f83f66cbe60f112eab6c947e99222d7b2b3ff45769bb2f. Mar 6 01:43:29.620383 containerd[1451]: time="2026-03-06T01:43:29.620324548Z" level=info msg="StartContainer for \"fe8dbc9108d7d640b9f83f66cbe60f112eab6c947e99222d7b2b3ff45769bb2f\" returns successfully" Mar 6 01:43:29.658642 systemd[1]: Created slice kubepods-besteffort-pod977c9795_dcad_4a6a_8717_7b63d6db97ee.slice - libcontainer container kubepods-besteffort-pod977c9795_dcad_4a6a_8717_7b63d6db97ee.slice. Mar 6 01:43:29.663707 containerd[1451]: time="2026-03-06T01:43:29.662624994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-df657,Uid:977c9795-dcad-4a6a-8717-7b63d6db97ee,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:29.799664 containerd[1451]: time="2026-03-06T01:43:29.799092847Z" level=error msg="Failed to destroy network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.799664 containerd[1451]: time="2026-03-06T01:43:29.799494142Z" level=error msg="encountered an error cleaning up failed sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.799664 containerd[1451]: time="2026-03-06T01:43:29.799539045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-df657,Uid:977c9795-dcad-4a6a-8717-7b63d6db97ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.799982 kubelet[2584]: E0306 01:43:29.799878 2584 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:29.799982 kubelet[2584]: E0306 
01:43:29.799929 2584 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-df657" Mar 6 01:43:29.799982 kubelet[2584]: E0306 01:43:29.799952 2584 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-df657" Mar 6 01:43:29.800410 kubelet[2584]: E0306 01:43:29.799995 2584 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-df657_calico-system(977c9795-dcad-4a6a-8717-7b63d6db97ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-df657_calico-system(977c9795-dcad-4a6a-8717-7b63d6db97ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-df657" podUID="977c9795-dcad-4a6a-8717-7b63d6db97ee" Mar 6 01:43:29.820432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12-shm.mount: Deactivated successfully. 
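
All of the sandbox failures above trace to the same missing file, and the error text itself names the first thing to verify: that the calico/node container is running and has written /var/lib/calico/nodename on the host. The following is a minimal illustrative sketch of that check, not part of the captured log; the script name and its exact messages are assumptions, only the path and the remediation wording come from the errors above.

# check_calico_nodename.py -- illustrative sketch, not part of the captured log.
# Mirrors the check the Calico CNI plugin performs before setting up a pod
# sandbox: stat /var/lib/calico/nodename. When the file is absent, sandbox
# creation fails exactly as in the kubelet/containerd errors above.
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path cited in the log errors

def main() -> int:
    try:
        with open(NODENAME_FILE) as f:
            print(f"calico nodename present: {f.read().strip()}")
        return 0
    except FileNotFoundError:
        # Same condition the plugin reports as "no such file or directory".
        print(
            f"{NODENAME_FILE} is missing: check that the calico/node container "
            "is running and has mounted /var/lib/calico/",
            file=sys.stderr,
        )
        return 1

if __name__ == "__main__":
    sys.exit(main())

Once calico-node comes up and writes the file, the retried sandboxes (Attempt:1 below) can proceed through normal CNI setup, which is what the subsequent teardown/re-run entries show.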
Mar 6 01:43:30.324288 kubelet[2584]: I0306 01:43:30.324056 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:30.325470 kubelet[2584]: I0306 01:43:30.325409 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:30.331428 kubelet[2584]: I0306 01:43:30.331283 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:30.334419 kubelet[2584]: I0306 01:43:30.334361 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:30.338839 containerd[1451]: time="2026-03-06T01:43:30.338628136Z" level=info msg="StopPodSandbox for \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\"" Mar 6 01:43:30.338839 containerd[1451]: time="2026-03-06T01:43:30.338661688Z" level=info msg="StopPodSandbox for \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\"" Mar 6 01:43:30.340541 containerd[1451]: time="2026-03-06T01:43:30.340436037Z" level=info msg="StopPodSandbox for \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\"" Mar 6 01:43:30.342232 containerd[1451]: time="2026-03-06T01:43:30.342131304Z" level=info msg="StopPodSandbox for \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\"" Mar 6 01:43:30.343652 containerd[1451]: time="2026-03-06T01:43:30.343573063Z" level=info msg="Ensure that sandbox 1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12 in task-service has been cleanup successfully" Mar 6 01:43:30.343652 containerd[1451]: time="2026-03-06T01:43:30.343616115Z" level=info msg="Ensure that sandbox 72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036 in task-service has been cleanup successfully" Mar 6 01:43:30.344244 containerd[1451]: time="2026-03-06T01:43:30.343918980Z" level=info msg="Ensure that sandbox 751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e in task-service has been cleanup successfully" Mar 6 01:43:30.344608 containerd[1451]: time="2026-03-06T01:43:30.343617266Z" level=info msg="Ensure that sandbox e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f in task-service has been cleanup successfully" Mar 6 01:43:30.354036 kubelet[2584]: I0306 01:43:30.353946 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:30.355657 containerd[1451]: time="2026-03-06T01:43:30.355547653Z" level=info msg="StopPodSandbox for \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\"" Mar 6 01:43:30.358374 containerd[1451]: time="2026-03-06T01:43:30.358120303Z" level=info msg="Ensure that sandbox df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40 in task-service has been cleanup successfully" Mar 6 01:43:30.404139 kubelet[2584]: I0306 01:43:30.403862 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:30.406511 containerd[1451]: time="2026-03-06T01:43:30.406065715Z" level=info msg="StopPodSandbox for \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\"" Mar 6 01:43:30.406511 containerd[1451]: 
time="2026-03-06T01:43:30.406251971Z" level=info msg="Ensure that sandbox aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5 in task-service has been cleanup successfully" Mar 6 01:43:30.417270 kubelet[2584]: I0306 01:43:30.417173 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:30.427890 containerd[1451]: time="2026-03-06T01:43:30.427624720Z" level=info msg="StopPodSandbox for \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\"" Mar 6 01:43:30.428170 containerd[1451]: time="2026-03-06T01:43:30.428117865Z" level=info msg="Ensure that sandbox 956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2 in task-service has been cleanup successfully" Mar 6 01:43:30.445977 kubelet[2584]: I0306 01:43:30.445933 2584 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:30.448852 containerd[1451]: time="2026-03-06T01:43:30.448700175Z" level=info msg="StopPodSandbox for \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\"" Mar 6 01:43:30.450530 containerd[1451]: time="2026-03-06T01:43:30.450494369Z" level=info msg="Ensure that sandbox db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440 in task-service has been cleanup successfully" Mar 6 01:43:30.616519 kubelet[2584]: I0306 01:43:30.616445 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c7bl6" podStartSLOduration=3.8073972190000003 podStartE2EDuration="16.616414593s" podCreationTimestamp="2026-03-06 01:43:14 +0000 UTC" firstStartedPulling="2026-03-06 01:43:14.635942744 +0000 UTC m=+23.113421947" lastFinishedPulling="2026-03-06 01:43:27.444960127 +0000 UTC m=+35.922439321" observedRunningTime="2026-03-06 01:43:30.42562836 +0000 UTC m=+38.903107583" watchObservedRunningTime="2026-03-06 01:43:30.616414593 +0000 UTC m=+39.093893816" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.629 [INFO][3802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.630 [INFO][3802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" iface="eth0" netns="/var/run/netns/cni-dfa093ed-16af-6222-ddc5-7d89f58c9d0c" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.630 [INFO][3802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" iface="eth0" netns="/var/run/netns/cni-dfa093ed-16af-6222-ddc5-7d89f58c9d0c" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" iface="eth0" netns="/var/run/netns/cni-dfa093ed-16af-6222-ddc5-7d89f58c9d0c" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.725 [INFO][3901] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.732 [INFO][3901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.732 [INFO][3901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.762 [WARNING][3901] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.762 [INFO][3901] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.785 [INFO][3901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:30.801795 containerd[1451]: 2026-03-06 01:43:30.795 [INFO][3802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:30.809586 containerd[1451]: time="2026-03-06T01:43:30.809447723Z" level=info msg="TearDown network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\" successfully" Mar 6 01:43:30.809586 containerd[1451]: time="2026-03-06T01:43:30.809495441Z" level=info msg="StopPodSandbox for \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\" returns successfully" Mar 6 01:43:30.810581 systemd[1]: run-netns-cni\x2ddfa093ed\x2d16af\x2d6222\x2dddc5\x2d7d89f58c9d0c.mount: Deactivated successfully. 
Mar 6 01:43:30.835366 kubelet[2584]: I0306 01:43:30.834975 2584 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-ca-bundle\") pod \"a5517b25-89f2-4a90-a131-f286bdda3fd7\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " Mar 6 01:43:30.835366 kubelet[2584]: I0306 01:43:30.835054 2584 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-nginx-config\") pod \"a5517b25-89f2-4a90-a131-f286bdda3fd7\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " Mar 6 01:43:30.835366 kubelet[2584]: I0306 01:43:30.835087 2584 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gl84\" (UniqueName: \"kubernetes.io/projected/a5517b25-89f2-4a90-a131-f286bdda3fd7-kube-api-access-2gl84\") pod \"a5517b25-89f2-4a90-a131-f286bdda3fd7\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " Mar 6 01:43:30.835366 kubelet[2584]: I0306 01:43:30.835124 2584 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-backend-key-pair\") pod \"a5517b25-89f2-4a90-a131-f286bdda3fd7\" (UID: \"a5517b25-89f2-4a90-a131-f286bdda3fd7\") " Mar 6 01:43:30.837595 kubelet[2584]: I0306 01:43:30.837288 2584 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a5517b25-89f2-4a90-a131-f286bdda3fd7" (UID: "a5517b25-89f2-4a90-a131-f286bdda3fd7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:43:30.837595 kubelet[2584]: I0306 01:43:30.837494 2584 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a5517b25-89f2-4a90-a131-f286bdda3fd7" (UID: "a5517b25-89f2-4a90-a131-f286bdda3fd7"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:43:30.846308 kubelet[2584]: I0306 01:43:30.846247 2584 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a5517b25-89f2-4a90-a131-f286bdda3fd7" (UID: "a5517b25-89f2-4a90-a131-f286bdda3fd7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 01:43:30.852697 systemd[1]: var-lib-kubelet-pods-a5517b25\x2d89f2\x2d4a90\x2da131\x2df286bdda3fd7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 01:43:30.861684 systemd[1]: var-lib-kubelet-pods-a5517b25\x2d89f2\x2d4a90\x2da131\x2df286bdda3fd7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2gl84.mount: Deactivated successfully. Mar 6 01:43:30.862431 kubelet[2584]: I0306 01:43:30.862383 2584 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5517b25-89f2-4a90-a131-f286bdda3fd7-kube-api-access-2gl84" (OuterVolumeSpecName: "kube-api-access-2gl84") pod "a5517b25-89f2-4a90-a131-f286bdda3fd7" (UID: "a5517b25-89f2-4a90-a131-f286bdda3fd7"). 
InnerVolumeSpecName "kube-api-access-2gl84". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.634 [INFO][3795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" iface="eth0" netns="/var/run/netns/cni-c24622ac-5997-4f1e-40be-c8a9780db0ae" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" iface="eth0" netns="/var/run/netns/cni-c24622ac-5997-4f1e-40be-c8a9780db0ae" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" iface="eth0" netns="/var/run/netns/cni-c24622ac-5997-4f1e-40be-c8a9780db0ae" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.814 [INFO][3900] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.815 [INFO][3900] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.815 [INFO][3900] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.837 [WARNING][3900] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.837 [INFO][3900] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.842 [INFO][3900] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:30.884237 containerd[1451]: 2026-03-06 01:43:30.868 [INFO][3795] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:30.889363 containerd[1451]: time="2026-03-06T01:43:30.887644267Z" level=info msg="TearDown network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\" successfully" Mar 6 01:43:30.889363 containerd[1451]: time="2026-03-06T01:43:30.889147451Z" level=info msg="StopPodSandbox for \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\" returns successfully" Mar 6 01:43:30.891527 systemd[1]: run-netns-cni\x2dc24622ac\x2d5997\x2d4f1e\x2d40be\x2dc8a9780db0ae.mount: Deactivated successfully. Mar 6 01:43:30.898389 containerd[1451]: time="2026-03-06T01:43:30.898360669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6dbb68d8-nnps9,Uid:29925f97-ffaa-4463-8aba-6f0558d0f689,Namespace:calico-system,Attempt:1,}" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.630 [INFO][3803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.632 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" iface="eth0" netns="/var/run/netns/cni-787735f6-fa27-fe3c-78ad-4eb4b7a8af9d" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.633 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" iface="eth0" netns="/var/run/netns/cni-787735f6-fa27-fe3c-78ad-4eb4b7a8af9d" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.633 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" iface="eth0" netns="/var/run/netns/cni-787735f6-fa27-fe3c-78ad-4eb4b7a8af9d" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.634 [INFO][3803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.634 [INFO][3803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.858 [INFO][3896] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.866 [INFO][3896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.866 [INFO][3896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.886 [WARNING][3896] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.886 [INFO][3896] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.888 [INFO][3896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:30.911348 containerd[1451]: 2026-03-06 01:43:30.901 [INFO][3803] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:30.919637 containerd[1451]: time="2026-03-06T01:43:30.919442533Z" level=info msg="TearDown network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\" successfully" Mar 6 01:43:30.919637 containerd[1451]: time="2026-03-06T01:43:30.919597482Z" level=info msg="StopPodSandbox for \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\" returns successfully" Mar 6 01:43:30.920184 systemd[1]: run-netns-cni\x2d787735f6\x2dfa27\x2dfe3c\x2d78ad\x2d4eb4b7a8af9d.mount: Deactivated successfully. Mar 6 01:43:30.925492 containerd[1451]: time="2026-03-06T01:43:30.925453441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-df657,Uid:977c9795-dcad-4a6a-8717-7b63d6db97ee,Namespace:calico-system,Attempt:1,}" Mar 6 01:43:30.937172 kubelet[2584]: I0306 01:43:30.937076 2584 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:30.937172 kubelet[2584]: I0306 01:43:30.937117 2584 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:30.937172 kubelet[2584]: I0306 01:43:30.937132 2584 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5517b25-89f2-4a90-a131-f286bdda3fd7-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:30.937172 kubelet[2584]: I0306 01:43:30.937144 2584 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gl84\" (UniqueName: \"kubernetes.io/projected/a5517b25-89f2-4a90-a131-f286bdda3fd7-kube-api-access-2gl84\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.719 [INFO][3822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.722 [INFO][3822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" iface="eth0" netns="/var/run/netns/cni-5b187fbc-608e-e748-88a9-c0ef1eb196bb" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.722 [INFO][3822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" iface="eth0" netns="/var/run/netns/cni-5b187fbc-608e-e748-88a9-c0ef1eb196bb" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.724 [INFO][3822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" iface="eth0" netns="/var/run/netns/cni-5b187fbc-608e-e748-88a9-c0ef1eb196bb" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.724 [INFO][3822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.724 [INFO][3822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.888 [INFO][3937] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.891 [INFO][3937] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.891 [INFO][3937] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.905 [WARNING][3937] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.906 [INFO][3937] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.909 [INFO][3937] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:30.973582 containerd[1451]: 2026-03-06 01:43:30.932 [INFO][3822] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:30.976173 containerd[1451]: time="2026-03-06T01:43:30.975960068Z" level=info msg="TearDown network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\" successfully" Mar 6 01:43:30.976173 containerd[1451]: time="2026-03-06T01:43:30.976000874Z" level=info msg="StopPodSandbox for \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\" returns successfully" Mar 6 01:43:30.977590 kubelet[2584]: E0306 01:43:30.976533 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:30.978027 containerd[1451]: time="2026-03-06T01:43:30.977943456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b5xzx,Uid:7c745590-fa59-4b3b-8745-5a7c8ee1d2b2,Namespace:kube-system,Attempt:1,}" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.622 [INFO][3791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.625 [INFO][3791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" iface="eth0" netns="/var/run/netns/cni-b0cd67cb-94c2-2ab5-6925-f00adc2aeb56" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.628 [INFO][3791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" iface="eth0" netns="/var/run/netns/cni-b0cd67cb-94c2-2ab5-6925-f00adc2aeb56" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" iface="eth0" netns="/var/run/netns/cni-b0cd67cb-94c2-2ab5-6925-f00adc2aeb56" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.635 [INFO][3791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.636 [INFO][3791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.895 [INFO][3902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.895 [INFO][3902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.913 [INFO][3902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.933 [WARNING][3902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.933 [INFO][3902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.937 [INFO][3902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:30.980043 containerd[1451]: 2026-03-06 01:43:30.971 [INFO][3791] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:30.981228 containerd[1451]: time="2026-03-06T01:43:30.981035097Z" level=info msg="TearDown network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\" successfully" Mar 6 01:43:30.981228 containerd[1451]: time="2026-03-06T01:43:30.981095981Z" level=info msg="StopPodSandbox for \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\" returns successfully" Mar 6 01:43:30.981694 kubelet[2584]: E0306 01:43:30.981587 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:30.983018 containerd[1451]: time="2026-03-06T01:43:30.982682969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nh5sh,Uid:dff04dd3-ef84-4619-a71a-c275e3897a95,Namespace:kube-system,Attempt:1,}" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.728 [INFO][3857] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.730 [INFO][3857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" iface="eth0" netns="/var/run/netns/cni-05bb082d-4696-88f3-7830-c92cadeef9ee" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.730 [INFO][3857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" iface="eth0" netns="/var/run/netns/cni-05bb082d-4696-88f3-7830-c92cadeef9ee" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.730 [INFO][3857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" iface="eth0" netns="/var/run/netns/cni-05bb082d-4696-88f3-7830-c92cadeef9ee" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.730 [INFO][3857] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.730 [INFO][3857] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.938 [INFO][3939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.951 [INFO][3939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.974 [INFO][3939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.999 [WARNING][3939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:30.999 [INFO][3939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:31.002 [INFO][3939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.021246 containerd[1451]: 2026-03-06 01:43:31.008 [INFO][3857] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:31.022159 containerd[1451]: time="2026-03-06T01:43:31.022121928Z" level=info msg="TearDown network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\" successfully" Mar 6 01:43:31.022264 containerd[1451]: time="2026-03-06T01:43:31.022246460Z" level=info msg="StopPodSandbox for \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\" returns successfully" Mar 6 01:43:31.026532 containerd[1451]: time="2026-03-06T01:43:31.026391802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-q6vc5,Uid:a165b4e4-ca12-4318-93a2-9f1d976fbb5d,Namespace:calico-system,Attempt:1,}" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.786 [INFO][3864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.791 [INFO][3864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" iface="eth0" netns="/var/run/netns/cni-aed2eb61-3ecc-e5b1-cbd5-f3ec6f017511" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.795 [INFO][3864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" iface="eth0" netns="/var/run/netns/cni-aed2eb61-3ecc-e5b1-cbd5-f3ec6f017511" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.796 [INFO][3864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" iface="eth0" netns="/var/run/netns/cni-aed2eb61-3ecc-e5b1-cbd5-f3ec6f017511" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.796 [INFO][3864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.796 [INFO][3864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.937 [INFO][3956] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.937 [INFO][3956] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.937 [INFO][3956] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.959 [WARNING][3956] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.960 [INFO][3956] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:30.974 [INFO][3956] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.039387 containerd[1451]: 2026-03-06 01:43:31.012 [INFO][3864] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:31.040634 containerd[1451]: time="2026-03-06T01:43:31.039677720Z" level=info msg="TearDown network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\" successfully" Mar 6 01:43:31.040634 containerd[1451]: time="2026-03-06T01:43:31.039827809Z" level=info msg="StopPodSandbox for \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\" returns successfully" Mar 6 01:43:31.040818 containerd[1451]: time="2026-03-06T01:43:31.040688638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-w9fg5,Uid:9d73777b-f4c0-4c9b-90d2-bd41b4633f25,Namespace:calico-system,Attempt:1,}" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.843 [INFO][3872] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.844 [INFO][3872] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" iface="eth0" netns="/var/run/netns/cni-b4bebaa0-29de-d0e0-68c6-b0f59edb10c3" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.846 [INFO][3872] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" iface="eth0" netns="/var/run/netns/cni-b4bebaa0-29de-d0e0-68c6-b0f59edb10c3" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.855 [INFO][3872] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" iface="eth0" netns="/var/run/netns/cni-b4bebaa0-29de-d0e0-68c6-b0f59edb10c3" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.855 [INFO][3872] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.855 [INFO][3872] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.958 [INFO][3970] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:30.958 [INFO][3970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:31.008 [INFO][3970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:31.024 [WARNING][3970] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:31.025 [INFO][3970] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:31.033 [INFO][3970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.046952 containerd[1451]: 2026-03-06 01:43:31.042 [INFO][3872] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:31.047447 containerd[1451]: time="2026-03-06T01:43:31.047267142Z" level=info msg="TearDown network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\" successfully" Mar 6 01:43:31.047447 containerd[1451]: time="2026-03-06T01:43:31.047284865Z" level=info msg="StopPodSandbox for \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\" returns successfully" Mar 6 01:43:31.049926 containerd[1451]: time="2026-03-06T01:43:31.049900395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-jnpd7,Uid:00782c1b-bef0-48dd-8d89-f3e72a842b74,Namespace:calico-system,Attempt:1,}" Mar 6 01:43:31.308067 systemd-networkd[1375]: cali2528dd2650e: Link UP Mar 6 01:43:31.308450 systemd-networkd[1375]: cali2528dd2650e: Gained carrier Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.041 [ERROR][3985] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.082 [INFO][3985] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--df657-eth0 csi-node-driver- calico-system 977c9795-dcad-4a6a-8717-7b63d6db97ee 946 0 2026-03-06 01:43:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-df657 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2528dd2650e [] [] }} ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.082 [INFO][3985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.160 [INFO][4051] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" 
HandleID="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.182 [INFO][4051] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" HandleID="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Workload="localhost-k8s-csi--node--driver--df657-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-df657", "timestamp":"2026-03-06 01:43:31.160450806 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002189a0)} Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.182 [INFO][4051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.183 [INFO][4051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.183 [INFO][4051] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.188 [INFO][4051] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.199 [INFO][4051] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.213 [INFO][4051] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.218 [INFO][4051] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.222 [INFO][4051] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.222 [INFO][4051] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.225 [INFO][4051] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5 Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.237 [INFO][4051] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.253 [INFO][4051] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.254 [INFO][4051] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" 
host="localhost" Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.254 [INFO][4051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.356530 containerd[1451]: 2026-03-06 01:43:31.254 [INFO][4051] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" HandleID="k8s-pod-network.bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.357502 containerd[1451]: 2026-03-06 01:43:31.262 [INFO][3985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--df657-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977c9795-dcad-4a6a-8717-7b63d6db97ee", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-df657", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528dd2650e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.357502 containerd[1451]: 2026-03-06 01:43:31.263 [INFO][3985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.357502 containerd[1451]: 2026-03-06 01:43:31.264 [INFO][3985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2528dd2650e ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.357502 containerd[1451]: 2026-03-06 01:43:31.308 [INFO][3985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.357502 containerd[1451]: 2026-03-06 01:43:31.309 [INFO][3985] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--df657-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977c9795-dcad-4a6a-8717-7b63d6db97ee", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5", Pod:"csi-node-driver-df657", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528dd2650e", MAC:"e2:22:3d:c8:2f:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.357502 containerd[1451]: 2026-03-06 01:43:31.341 [INFO][3985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5" Namespace="calico-system" Pod="csi-node-driver-df657" WorkloadEndpoint="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:31.429184 systemd-networkd[1375]: cali1753f41fd22: Link UP Mar 6 01:43:31.431945 systemd-networkd[1375]: cali1753f41fd22: Gained carrier Mar 6 01:43:31.445318 containerd[1451]: time="2026-03-06T01:43:31.444455292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:31.445318 containerd[1451]: time="2026-03-06T01:43:31.444612285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:31.445318 containerd[1451]: time="2026-03-06T01:43:31.444635687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.445318 containerd[1451]: time="2026-03-06T01:43:31.444911961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.450846 kubelet[2584]: I0306 01:43:31.450093 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:31.460634 systemd[1]: Removed slice kubepods-besteffort-poda5517b25_89f2_4a90_a131_f286bdda3fd7.slice - libcontainer container kubepods-besteffort-poda5517b25_89f2_4a90_a131_f286bdda3fd7.slice. 
Mar 6 01:43:31.502545 systemd[1]: Started cri-containerd-bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5.scope - libcontainer container bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5. Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.042 [ERROR][3996] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.076 [INFO][3996] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0 calico-kube-controllers-6c6dbb68d8- calico-system 29925f97-ffaa-4463-8aba-6f0558d0f689 948 0 2026-03-06 01:43:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c6dbb68d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c6dbb68d8-nnps9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1753f41fd22 [] [] }} ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.076 [INFO][3996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.188 [INFO][4035] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" HandleID="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.209 [INFO][4035] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" HandleID="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c6dbb68d8-nnps9", "timestamp":"2026-03-06 01:43:31.188854654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000168840)} Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.209 [INFO][4035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.254 [INFO][4035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.255 [INFO][4035] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.293 [INFO][4035] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.343 [INFO][4035] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.366 [INFO][4035] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.375 [INFO][4035] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.380 [INFO][4035] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.380 [INFO][4035] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.383 [INFO][4035] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43 Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.391 [INFO][4035] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.408 [INFO][4035] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.408 [INFO][4035] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" host="localhost" Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.408 [INFO][4035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:43:31.517451 containerd[1451]: 2026-03-06 01:43:31.408 [INFO][4035] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" HandleID="k8s-pod-network.8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.522430 containerd[1451]: 2026-03-06 01:43:31.420 [INFO][3996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0", GenerateName:"calico-kube-controllers-6c6dbb68d8-", Namespace:"calico-system", SelfLink:"", UID:"29925f97-ffaa-4463-8aba-6f0558d0f689", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6dbb68d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c6dbb68d8-nnps9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1753f41fd22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.522430 containerd[1451]: 2026-03-06 01:43:31.420 [INFO][3996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.522430 containerd[1451]: 2026-03-06 01:43:31.421 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1753f41fd22 ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.522430 containerd[1451]: 2026-03-06 01:43:31.442 [INFO][3996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.522430 containerd[1451]: 2026-03-06 01:43:31.475 [INFO][3996] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0", GenerateName:"calico-kube-controllers-6c6dbb68d8-", Namespace:"calico-system", SelfLink:"", UID:"29925f97-ffaa-4463-8aba-6f0558d0f689", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6dbb68d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43", Pod:"calico-kube-controllers-6c6dbb68d8-nnps9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1753f41fd22", MAC:"72:27:ed:e7:1a:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.522430 containerd[1451]: 2026-03-06 01:43:31.507 [INFO][3996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43" Namespace="calico-system" Pod="calico-kube-controllers-6c6dbb68d8-nnps9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:31.581137 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:31.596221 systemd[1]: Created slice kubepods-besteffort-podbf7b2a21_1704_4107_a953_bfca16b9f900.slice - libcontainer container kubepods-besteffort-podbf7b2a21_1704_4107_a953_bfca16b9f900.slice. 
Mar 6 01:43:31.605513 systemd-networkd[1375]: calid6955b7da70: Link UP Mar 6 01:43:31.619000 systemd-networkd[1375]: calid6955b7da70: Gained carrier Mar 6 01:43:31.629362 containerd[1451]: time="2026-03-06T01:43:31.629253848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-df657,Uid:977c9795-dcad-4a6a-8717-7b63d6db97ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5\"" Mar 6 01:43:31.633489 containerd[1451]: time="2026-03-06T01:43:31.633215340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 01:43:31.644493 kubelet[2584]: I0306 01:43:31.644410 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7b2a21-1704-4107-a953-bfca16b9f900-whisker-ca-bundle\") pod \"whisker-f7cb5c45f-dpzcb\" (UID: \"bf7b2a21-1704-4107-a953-bfca16b9f900\") " pod="calico-system/whisker-f7cb5c45f-dpzcb" Mar 6 01:43:31.644641 kubelet[2584]: I0306 01:43:31.644510 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bf7b2a21-1704-4107-a953-bfca16b9f900-nginx-config\") pod \"whisker-f7cb5c45f-dpzcb\" (UID: \"bf7b2a21-1704-4107-a953-bfca16b9f900\") " pod="calico-system/whisker-f7cb5c45f-dpzcb" Mar 6 01:43:31.644641 kubelet[2584]: I0306 01:43:31.644561 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85sjn\" (UniqueName: \"kubernetes.io/projected/bf7b2a21-1704-4107-a953-bfca16b9f900-kube-api-access-85sjn\") pod \"whisker-f7cb5c45f-dpzcb\" (UID: \"bf7b2a21-1704-4107-a953-bfca16b9f900\") " pod="calico-system/whisker-f7cb5c45f-dpzcb" Mar 6 01:43:31.644641 kubelet[2584]: I0306 01:43:31.644598 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bf7b2a21-1704-4107-a953-bfca16b9f900-whisker-backend-key-pair\") pod \"whisker-f7cb5c45f-dpzcb\" (UID: \"bf7b2a21-1704-4107-a953-bfca16b9f900\") " pod="calico-system/whisker-f7cb5c45f-dpzcb" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.173 [ERROR][4015] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.198 [INFO][4015] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0 coredns-674b8bbfcf- kube-system 7c745590-fa59-4b3b-8745-5a7c8ee1d2b2 949 0 2026-03-06 01:42:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-b5xzx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6955b7da70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.198 [INFO][4015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.325 [INFO][4098] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" HandleID="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.353 [INFO][4098] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" HandleID="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003eda70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-b5xzx", "timestamp":"2026-03-06 01:43:31.325890819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004c5600)} Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.353 [INFO][4098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.409 [INFO][4098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.410 [INFO][4098] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.428 [INFO][4098] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.485 [INFO][4098] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.502 [INFO][4098] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.515 [INFO][4098] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.522 [INFO][4098] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.523 [INFO][4098] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.528 [INFO][4098] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3 Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.538 [INFO][4098] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.559 [INFO][4098] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.563 [INFO][4098] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" host="localhost" Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.567 [INFO][4098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.653110 containerd[1451]: 2026-03-06 01:43:31.568 [INFO][4098] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" HandleID="k8s-pod-network.d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.656886 containerd[1451]: 2026-03-06 01:43:31.594 [INFO][4015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c745590-fa59-4b3b-8745-5a7c8ee1d2b2", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-b5xzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6955b7da70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.656886 containerd[1451]: 2026-03-06 01:43:31.597 [INFO][4015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.656886 containerd[1451]: 2026-03-06 01:43:31.597 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calid6955b7da70 ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.656886 containerd[1451]: 2026-03-06 01:43:31.623 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.656886 containerd[1451]: 2026-03-06 01:43:31.625 [INFO][4015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c745590-fa59-4b3b-8745-5a7c8ee1d2b2", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3", Pod:"coredns-674b8bbfcf-b5xzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6955b7da70", MAC:"da:bb:db:fc:6d:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.656886 containerd[1451]: 2026-03-06 01:43:31.646 [INFO][4015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3" Namespace="kube-system" Pod="coredns-674b8bbfcf-b5xzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:31.657177 kubelet[2584]: I0306 01:43:31.653449 2584 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5517b25-89f2-4a90-a131-f286bdda3fd7" path="/var/lib/kubelet/pods/a5517b25-89f2-4a90-a131-f286bdda3fd7/volumes" Mar 6 01:43:31.658806 containerd[1451]: time="2026-03-06T01:43:31.655930299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:31.658806 containerd[1451]: time="2026-03-06T01:43:31.656022730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:31.658806 containerd[1451]: time="2026-03-06T01:43:31.656045623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.658806 containerd[1451]: time="2026-03-06T01:43:31.656176467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.705224 systemd[1]: Started cri-containerd-8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43.scope - libcontainer container 8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43. Mar 6 01:43:31.721370 containerd[1451]: time="2026-03-06T01:43:31.720614974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:31.721370 containerd[1451]: time="2026-03-06T01:43:31.720687900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:31.721370 containerd[1451]: time="2026-03-06T01:43:31.720863787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.721370 containerd[1451]: time="2026-03-06T01:43:31.721040364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.722948 systemd-networkd[1375]: cali2e80e7f6bfb: Link UP Mar 6 01:43:31.724480 systemd-networkd[1375]: cali2e80e7f6bfb: Gained carrier Mar 6 01:43:31.732652 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.160 [ERROR][4011] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.193 [INFO][4011] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0 coredns-674b8bbfcf- kube-system dff04dd3-ef84-4619-a71a-c275e3897a95 945 0 2026-03-06 01:42:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nh5sh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e80e7f6bfb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.193 [INFO][4011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.763338 containerd[1451]: 
2026-03-06 01:43:31.332 [INFO][4099] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" HandleID="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.355 [INFO][4099] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" HandleID="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005f40a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nh5sh", "timestamp":"2026-03-06 01:43:31.332522855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000b2580)} Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.356 [INFO][4099] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.565 [INFO][4099] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.566 [INFO][4099] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.579 [INFO][4099] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.613 [INFO][4099] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.640 [INFO][4099] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.651 [INFO][4099] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.656 [INFO][4099] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.657 [INFO][4099] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.661 [INFO][4099] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893 Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.675 [INFO][4099] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.697 [INFO][4099] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 
01:43:31.697 [INFO][4099] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" host="localhost" Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.697 [INFO][4099] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.763338 containerd[1451]: 2026-03-06 01:43:31.697 [INFO][4099] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" HandleID="k8s-pod-network.fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.765116 containerd[1451]: 2026-03-06 01:43:31.710 [INFO][4011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dff04dd3-ef84-4619-a71a-c275e3897a95", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nh5sh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e80e7f6bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.765116 containerd[1451]: 2026-03-06 01:43:31.712 [INFO][4011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.765116 containerd[1451]: 2026-03-06 01:43:31.712 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e80e7f6bfb ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.765116 containerd[1451]: 
2026-03-06 01:43:31.725 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.765116 containerd[1451]: 2026-03-06 01:43:31.726 [INFO][4011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dff04dd3-ef84-4619-a71a-c275e3897a95", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893", Pod:"coredns-674b8bbfcf-nh5sh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e80e7f6bfb", MAC:"3e:2b:a7:00:79:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.765116 containerd[1451]: 2026-03-06 01:43:31.753 [INFO][4011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893" Namespace="kube-system" Pod="coredns-674b8bbfcf-nh5sh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:31.813996 systemd[1]: Started cri-containerd-d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3.scope - libcontainer container d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3. Mar 6 01:43:31.877396 systemd-networkd[1375]: cali428609661cc: Link UP Mar 6 01:43:31.891242 systemd-networkd[1375]: cali428609661cc: Gained carrier Mar 6 01:43:31.893091 systemd[1]: run-netns-cni\x2d05bb082d\x2d4696\x2d88f3\x2d7830\x2dc92cadeef9ee.mount: Deactivated successfully. Mar 6 01:43:31.893236 systemd[1]: run-netns-cni\x2d5b187fbc\x2d608e\x2de748\x2d88a9\x2dc0ef1eb196bb.mount: Deactivated successfully. 
Mar 6 01:43:31.893395 systemd[1]: run-netns-cni\x2daed2eb61\x2d3ecc\x2de5b1\x2dcbd5\x2df3ec6f017511.mount: Deactivated successfully. Mar 6 01:43:31.893504 systemd[1]: run-netns-cni\x2db4bebaa0\x2d29de\x2dd0e0\x2d68c6\x2db0f59edb10c3.mount: Deactivated successfully. Mar 6 01:43:31.893604 systemd[1]: run-netns-cni\x2db0cd67cb\x2d94c2\x2d2ab5\x2d6925\x2df00adc2aeb56.mount: Deactivated successfully. Mar 6 01:43:31.906675 containerd[1451]: time="2026-03-06T01:43:31.906149579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7cb5c45f-dpzcb,Uid:bf7b2a21-1704-4107-a953-bfca16b9f900,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:31.929838 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:31.989241 containerd[1451]: time="2026-03-06T01:43:31.989003139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:31.989241 containerd[1451]: time="2026-03-06T01:43:31.989103265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:31.989241 containerd[1451]: time="2026-03-06T01:43:31.989139383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.990876 containerd[1451]: time="2026-03-06T01:43:31.989260367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.225 [ERROR][4067] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.245 [INFO][4067] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0 calico-apiserver-8687f94789- calico-system 00782c1b-bef0-48dd-8d89-f3e72a842b74 952 0 2026-03-06 01:43:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8687f94789 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8687f94789-jnpd7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali428609661cc [] [] }} ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.245 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.401 [INFO][4121] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" HandleID="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" 
Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.421 [INFO][4121] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" HandleID="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8687f94789-jnpd7", "timestamp":"2026-03-06 01:43:31.401282755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004cf340)} Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.421 [INFO][4121] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.699 [INFO][4121] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.700 [INFO][4121] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.709 [INFO][4121] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.722 [INFO][4121] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.745 [INFO][4121] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.756 [INFO][4121] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.766 [INFO][4121] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.766 [INFO][4121] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.770 [INFO][4121] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.818 [INFO][4121] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.850 [INFO][4121] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.850 [INFO][4121] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" host="localhost" Mar 6 01:43:31.994352 containerd[1451]: 
2026-03-06 01:43:31.850 [INFO][4121] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:31.994352 containerd[1451]: 2026-03-06 01:43:31.850 [INFO][4121] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" HandleID="k8s-pod-network.b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.995111 containerd[1451]: 2026-03-06 01:43:31.857 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"00782c1b-bef0-48dd-8d89-f3e72a842b74", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8687f94789-jnpd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali428609661cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.995111 containerd[1451]: 2026-03-06 01:43:31.857 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.995111 containerd[1451]: 2026-03-06 01:43:31.857 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali428609661cc ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.995111 containerd[1451]: 2026-03-06 01:43:31.913 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:31.995111 containerd[1451]: 2026-03-06 01:43:31.919 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"00782c1b-bef0-48dd-8d89-f3e72a842b74", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd", Pod:"calico-apiserver-8687f94789-jnpd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali428609661cc", MAC:"fa:5b:70:8b:29:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:31.995111 containerd[1451]: 2026-03-06 01:43:31.951 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd" Namespace="calico-system" Pod="calico-apiserver-8687f94789-jnpd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:32.073383 containerd[1451]: time="2026-03-06T01:43:32.073342598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c6dbb68d8-nnps9,Uid:29925f97-ffaa-4463-8aba-6f0558d0f689,Namespace:calico-system,Attempt:1,} returns sandbox id \"8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43\"" Mar 6 01:43:32.111064 systemd[1]: Started cri-containerd-fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893.scope - libcontainer container fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893. 
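The IPAM entries above walk through a single assignment: the plugin confirms affinity for the block 192.168.88.128/26 on this host, claims 192.168.88.133 from it, and the address later appears on the WorkloadEndpoint as the /32 network 192.168.88.133/32. A minimal Go sketch of the containment check, using only values printed in the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affinity block reported by the ipam entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Address the plugin claimed for calico-apiserver-8687f94789-jnpd7.
	claimed := netip.MustParseAddr("192.168.88.133")

	fmt.Printf("%s contains %s: %v\n", block, claimed, block.Contains(claimed)) // true
	fmt.Printf("block holds %d addresses\n", 1<<(32-block.Bits()))              // 64
}
```

The /26 block spans 192.168.88.128–192.168.88.191, so the .133–.136 addresses assigned throughout this section all come from the same affine block.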
Mar 6 01:43:32.117528 systemd-networkd[1375]: calic5c189e8586: Link UP Mar 6 01:43:32.125317 systemd-networkd[1375]: calic5c189e8586: Gained carrier Mar 6 01:43:32.142938 containerd[1451]: time="2026-03-06T01:43:32.142598748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b5xzx,Uid:7c745590-fa59-4b3b-8745-5a7c8ee1d2b2,Namespace:kube-system,Attempt:1,} returns sandbox id \"d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3\"" Mar 6 01:43:32.153878 kubelet[2584]: E0306 01:43:32.152140 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:32.189035 containerd[1451]: time="2026-03-06T01:43:32.188579900Z" level=info msg="CreateContainer within sandbox \"d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:43:32.198519 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:32.255937 containerd[1451]: time="2026-03-06T01:43:32.251257179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:32.262100 containerd[1451]: time="2026-03-06T01:43:32.260014823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:32.262100 containerd[1451]: time="2026-03-06T01:43:32.260085665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.281 [ERROR][4061] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.342 [INFO][4061] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--w9fg5-eth0 goldmane-5b85766d88- calico-system 9d73777b-f4c0-4c9b-90d2-bd41b4633f25 951 0 2026-03-06 01:43:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-w9fg5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic5c189e8586 [] [] }} ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.343 [INFO][4061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.453 [INFO][4143] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" HandleID="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" 
Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.475 [INFO][4143] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" HandleID="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-w9fg5", "timestamp":"2026-03-06 01:43:31.453327162 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000193080)} Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.475 [INFO][4143] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.854 [INFO][4143] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.854 [INFO][4143] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.871 [INFO][4143] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.924 [INFO][4143] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.956 [INFO][4143] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.976 [INFO][4143] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.992 [INFO][4143] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.992 [INFO][4143] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:31.998 [INFO][4143] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:32.019 [INFO][4143] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:32.050 [INFO][4143] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:32.050 [INFO][4143] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" host="localhost" Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:32.050 
[INFO][4143] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:32.262100 containerd[1451]: 2026-03-06 01:43:32.050 [INFO][4143] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" HandleID="k8s-pod-network.568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.266993 containerd[1451]: 2026-03-06 01:43:32.102 [INFO][4061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--w9fg5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9d73777b-f4c0-4c9b-90d2-bd41b4633f25", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-w9fg5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5c189e8586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:32.266993 containerd[1451]: 2026-03-06 01:43:32.102 [INFO][4061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.266993 containerd[1451]: 2026-03-06 01:43:32.102 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5c189e8586 ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.266993 containerd[1451]: 2026-03-06 01:43:32.127 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.266993 containerd[1451]: 2026-03-06 01:43:32.128 [INFO][4061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--w9fg5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9d73777b-f4c0-4c9b-90d2-bd41b4633f25", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f", Pod:"goldmane-5b85766d88-w9fg5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5c189e8586", MAC:"f2:2a:31:3a:e0:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:32.266993 containerd[1451]: 2026-03-06 01:43:32.186 [INFO][4061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f" Namespace="calico-system" Pod="goldmane-5b85766d88-w9fg5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:32.280632 containerd[1451]: time="2026-03-06T01:43:32.280332542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.305952 containerd[1451]: time="2026-03-06T01:43:32.297673183Z" level=info msg="CreateContainer within sandbox \"d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b39cd553e91fea123b444229aa94a69d8c83289543677c9e1f219bc3b873ac52\"" Mar 6 01:43:32.312382 containerd[1451]: time="2026-03-06T01:43:32.311281221Z" level=info msg="StartContainer for \"b39cd553e91fea123b444229aa94a69d8c83289543677c9e1f219bc3b873ac52\"" Mar 6 01:43:32.356255 containerd[1451]: time="2026-03-06T01:43:32.355133356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nh5sh,Uid:dff04dd3-ef84-4619-a71a-c275e3897a95,Namespace:kube-system,Attempt:1,} returns sandbox id \"fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893\"" Mar 6 01:43:32.357860 kubelet[2584]: E0306 01:43:32.357361 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:32.377426 containerd[1451]: time="2026-03-06T01:43:32.377320807Z" level=info msg="CreateContainer within sandbox \"fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:43:32.392357 systemd-networkd[1375]: calie2a5a21e814: Link UP Mar 6 01:43:32.417989 systemd-networkd[1375]: calie2a5a21e814: Gained carrier Mar 6 01:43:32.447332 containerd[1451]: time="2026-03-06T01:43:32.447014389Z" level=info msg="CreateContainer within sandbox \"fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"069ef4f87d8cca366c2a12e8f18a93d153a64c562c264e2bc90e47aa3a21b6f2\"" Mar 6 01:43:32.455838 containerd[1451]: time="2026-03-06T01:43:32.455206203Z" level=info msg="StartContainer for \"069ef4f87d8cca366c2a12e8f18a93d153a64c562c264e2bc90e47aa3a21b6f2\"" Mar 6 01:43:32.463265 systemd[1]: Started cri-containerd-b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd.scope - libcontainer container b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd. 
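The kubelet "Nameserver limits exceeded" errors above report the applied nameserver line as "1.1.1.1 1.0.0.1 8.8.8.8", i.e. the list was cut down to three entries, which matches the classic glibc resolver cap (MAXNS). A rough illustration of that trimming, not kubelet's actual code:

```go
package main

import "fmt"

// maxNameservers mirrors the three-entry resolver limit the kubelet error
// above is warning about (glibc's MAXNS).
const maxNameservers = 3

// trimNameservers keeps only the first maxNameservers entries, the way the
// applied line in the log ends up as "1.1.1.1 1.0.0.1 8.8.8.8".
func trimNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// Hypothetical upstream list with more than three servers.
	upstream := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(trimNameservers(upstream)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```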
Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:31.219 [ERROR][4041] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:31.247 [INFO][4041] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0 calico-apiserver-8687f94789- calico-system a165b4e4-ca12-4318-93a2-9f1d976fbb5d 950 0 2026-03-06 01:43:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8687f94789 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8687f94789-q6vc5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie2a5a21e814 [] [] }} ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:31.247 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:31.426 [INFO][4118] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" HandleID="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:31.496 [INFO][4118] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" HandleID="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000666620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8687f94789-q6vc5", "timestamp":"2026-03-06 01:43:31.426841374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006ca000)} Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:31.496 [INFO][4118] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.053 [INFO][4118] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
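For the calico-apiserver-8687f94789-q6vc5 request above, the plugin logs "About to acquire host-wide IPAM lock" at 01:43:31.496 but "Acquired" only at 01:43:32.053, because the earlier jnpd7 and goldmane assignments were still serialized ahead of it on the same lock. A toy in-process analogue of that serialization (Calico's real lock lives in the datastore and tracks per-block allocation state, so this is only a sketch):

```go
package main

import (
	"fmt"
	"sync"
)

// allocator serializes claims the way the host-wide IPAM lock in the log
// serializes the four CNI ADD requests against the same /26 block.
type allocator struct {
	mu   sync.Mutex
	next int // toy cursor; real IPAM tracks per-block allocation bitmaps
}

func (a *allocator) claim() string {
	a.mu.Lock() // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock()
	ip := fmt.Sprintf("192.168.88.%d", 133+a.next)
	a.next++
	return ip // lock released on return: "Released host-wide IPAM lock."
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	for _, pod := range []string{"jnpd7", "w9fg5", "q6vc5", "dpzcb"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", a.claim())
		}(pod)
	}
	wg.Wait()
}
```

Whichever request wins the lock first receives the next address, which is why the four pods in this section get .133 through .136 in exactly the order their requests acquire the lock (jnpd7 at 31.699, goldmane at 31.854, q6vc5 at 32.053, whisker at 32.635).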
Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.054 [INFO][4118] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.084 [INFO][4118] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.107 [INFO][4118] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.183 [INFO][4118] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.223 [INFO][4118] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.241 [INFO][4118] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.241 [INFO][4118] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.281 [INFO][4118] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665 Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.317 [INFO][4118] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.338 [INFO][4118] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.338 [INFO][4118] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" host="localhost" Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.338 [INFO][4118] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:43:32.509871 containerd[1451]: 2026-03-06 01:43:32.338 [INFO][4118] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" HandleID="k8s-pod-network.0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.510493 containerd[1451]: 2026-03-06 01:43:32.354 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"a165b4e4-ca12-4318-93a2-9f1d976fbb5d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8687f94789-q6vc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2a5a21e814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:32.510493 containerd[1451]: 2026-03-06 01:43:32.355 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.510493 containerd[1451]: 2026-03-06 01:43:32.355 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2a5a21e814 ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.510493 containerd[1451]: 2026-03-06 01:43:32.421 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.510493 containerd[1451]: 2026-03-06 01:43:32.423 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"a165b4e4-ca12-4318-93a2-9f1d976fbb5d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665", Pod:"calico-apiserver-8687f94789-q6vc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2a5a21e814", MAC:"ae:e4:d7:ee:4a:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:32.510493 containerd[1451]: 2026-03-06 01:43:32.473 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665" Namespace="calico-system" Pod="calico-apiserver-8687f94789-q6vc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:32.531917 containerd[1451]: time="2026-03-06T01:43:32.531740269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:32.536554 containerd[1451]: time="2026-03-06T01:43:32.534086118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:32.536554 containerd[1451]: time="2026-03-06T01:43:32.535398888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.536554 containerd[1451]: time="2026-03-06T01:43:32.536365234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.536869 systemd[1]: Started cri-containerd-b39cd553e91fea123b444229aa94a69d8c83289543677c9e1f219bc3b873ac52.scope - libcontainer container b39cd553e91fea123b444229aa94a69d8c83289543677c9e1f219bc3b873ac52. Mar 6 01:43:32.558003 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:32.635381 systemd[1]: Started cri-containerd-568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f.scope - libcontainer container 568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f. 
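The WorkloadEndpoint names repeated throughout these entries, such as localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0, follow a visible pattern: node name, a "k8s" marker, the pod name with each '-' doubled, and the interface. Recovering the pod name from such a string is mechanical; the sketch below is inferred purely from the names printed here, not from Calico's own escaping code:

```go
package main

import (
	"fmt"
	"strings"
)

// podFromWorkloadName recovers the pod name from the WorkloadEndpoint names
// seen in the log, where every '-' inside the pod name appears doubled.
func podFromWorkloadName(name, node, iface string) string {
	s := strings.TrimPrefix(name, node+"-k8s-")
	s = strings.TrimSuffix(s, "-"+iface)
	return strings.ReplaceAll(s, "--", "-")
}

func main() {
	for _, n := range []string{
		"localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0",
		"localhost-k8s-goldmane--5b85766d88--w9fg5-eth0",
		"localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0",
	} {
		fmt.Println(podFromWorkloadName(n, "localhost", "eth0"))
	}
	// calico-apiserver-8687f94789-q6vc5
	// goldmane-5b85766d88-w9fg5
	// whisker-f7cb5c45f-dpzcb
}
```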
Mar 6 01:43:32.653066 systemd[1]: Started cri-containerd-069ef4f87d8cca366c2a12e8f18a93d153a64c562c264e2bc90e47aa3a21b6f2.scope - libcontainer container 069ef4f87d8cca366c2a12e8f18a93d153a64c562c264e2bc90e47aa3a21b6f2. Mar 6 01:43:32.714046 containerd[1451]: time="2026-03-06T01:43:32.711882639Z" level=info msg="StartContainer for \"b39cd553e91fea123b444229aa94a69d8c83289543677c9e1f219bc3b873ac52\" returns successfully" Mar 6 01:43:32.716149 containerd[1451]: time="2026-03-06T01:43:32.713459728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:32.716149 containerd[1451]: time="2026-03-06T01:43:32.713512516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:32.716149 containerd[1451]: time="2026-03-06T01:43:32.713537131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.716149 containerd[1451]: time="2026-03-06T01:43:32.713628612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.724977 containerd[1451]: time="2026-03-06T01:43:32.724935224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-jnpd7,Uid:00782c1b-bef0-48dd-8d89-f3e72a842b74,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd\"" Mar 6 01:43:32.742132 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:32.777206 systemd[1]: Started cri-containerd-0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665.scope - libcontainer container 0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665. 
Mar 6 01:43:32.778028 systemd-networkd[1375]: cali2e80e7f6bfb: Gained IPv6LL Mar 6 01:43:32.779337 containerd[1451]: time="2026-03-06T01:43:32.778476901Z" level=info msg="StartContainer for \"069ef4f87d8cca366c2a12e8f18a93d153a64c562c264e2bc90e47aa3a21b6f2\" returns successfully" Mar 6 01:43:32.818476 systemd-networkd[1375]: cali5b4b2d4c4e6: Link UP Mar 6 01:43:32.826337 systemd-networkd[1375]: cali5b4b2d4c4e6: Gained carrier Mar 6 01:43:32.832895 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.245 [ERROR][4379] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.310 [INFO][4379] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0 whisker-f7cb5c45f- calico-system bf7b2a21-1704-4107-a953-bfca16b9f900 978 0 2026-03-06 01:43:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f7cb5c45f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f7cb5c45f-dpzcb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5b4b2d4c4e6 [] [] }} ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.310 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.578 [INFO][4499] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" HandleID="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Workload="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.635 [INFO][4499] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" HandleID="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Workload="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060f9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f7cb5c45f-dpzcb", "timestamp":"2026-03-06 01:43:32.578074562 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005a4000)} Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.635 [INFO][4499] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.635 [INFO][4499] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.635 [INFO][4499] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.641 [INFO][4499] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.652 [INFO][4499] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.676 [INFO][4499] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.686 [INFO][4499] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.734 [INFO][4499] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.735 [INFO][4499] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.747 [INFO][4499] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.773 [INFO][4499] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.786 [INFO][4499] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.787 [INFO][4499] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" host="localhost" Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.787 [INFO][4499] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:43:32.889609 containerd[1451]: 2026-03-06 01:43:32.787 [INFO][4499] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" HandleID="k8s-pod-network.d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Workload="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.892443 containerd[1451]: 2026-03-06 01:43:32.793 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0", GenerateName:"whisker-f7cb5c45f-", Namespace:"calico-system", SelfLink:"", UID:"bf7b2a21-1704-4107-a953-bfca16b9f900", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7cb5c45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f7cb5c45f-dpzcb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b4b2d4c4e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:32.892443 containerd[1451]: 2026-03-06 01:43:32.794 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.892443 containerd[1451]: 2026-03-06 01:43:32.794 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b4b2d4c4e6 ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.892443 containerd[1451]: 2026-03-06 01:43:32.831 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.892443 containerd[1451]: 2026-03-06 01:43:32.833 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0", GenerateName:"whisker-f7cb5c45f-", Namespace:"calico-system", SelfLink:"", UID:"bf7b2a21-1704-4107-a953-bfca16b9f900", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7cb5c45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c", Pod:"whisker-f7cb5c45f-dpzcb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b4b2d4c4e6", MAC:"02:a2:47:be:1c:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:32.892443 containerd[1451]: 2026-03-06 01:43:32.859 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c" Namespace="calico-system" Pod="whisker-f7cb5c45f-dpzcb" WorkloadEndpoint="localhost-k8s-whisker--f7cb5c45f--dpzcb-eth0" Mar 6 01:43:32.916440 containerd[1451]: time="2026-03-06T01:43:32.916267116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-w9fg5,Uid:9d73777b-f4c0-4c9b-90d2-bd41b4633f25,Namespace:calico-system,Attempt:1,} returns sandbox id \"568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f\"" Mar 6 01:43:32.961240 containerd[1451]: time="2026-03-06T01:43:32.961108627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:32.963978 containerd[1451]: time="2026-03-06T01:43:32.961597815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:32.963978 containerd[1451]: time="2026-03-06T01:43:32.961624134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.967312 containerd[1451]: time="2026-03-06T01:43:32.967007899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:32.988494 containerd[1451]: time="2026-03-06T01:43:32.988414372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8687f94789-q6vc5,Uid:a165b4e4-ca12-4318-93a2-9f1d976fbb5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665\"" Mar 6 01:43:33.024138 systemd[1]: Started cri-containerd-d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c.scope - libcontainer container d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c. 
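Each endpoint above records a MAC (for example 02:a2:47:be:1c:13 on cali5b4b2d4c4e6), and the corresponding host-side interfaces later log "Gained IPv6LL". If the kernel is using the default EUI-64 scheme for link-local addresses (the log does not show addr_gen_mode, so this is an assumption), the address follows from the MAC as sketched here:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalFromMAC derives the classic EUI-64 link-local address for a
// 6-byte MAC: flip the universal/local bit of the first byte and splice
// ff:fe into the middle under the fe80::/64 prefix.
func linkLocalFromMAC(mac string) (netip.Addr, error) {
	hw, err := net.ParseMAC(mac)
	if err != nil {
		return netip.Addr{}, err
	}
	if len(hw) != 6 {
		return netip.Addr{}, fmt.Errorf("expected a 6-byte MAC, got %d bytes", len(hw))
	}
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	b[8] = hw[0] ^ 0x02     // invert the universal/local bit
	b[9], b[10], b[11] = hw[1], hw[2], 0xff
	b[12], b[13], b[14], b[15] = 0xfe, hw[3], hw[4], hw[5]
	return netip.AddrFrom16(b), nil
}

func main() {
	// MACs recorded on workload endpoints earlier in this log.
	for _, mac := range []string{"fa:5b:70:8b:29:d5", "02:a2:47:be:1c:13"} {
		ll, err := linkLocalFromMAC(mac)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s\n", mac, ll)
	}
}
```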
Mar 6 01:43:33.053373 systemd-resolved[1378]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:33.107094 containerd[1451]: time="2026-03-06T01:43:33.106882406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7cb5c45f-dpzcb,Uid:bf7b2a21-1704-4107-a953-bfca16b9f900,Namespace:calico-system,Attempt:0,} returns sandbox id \"d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c\"" Mar 6 01:43:33.126485 containerd[1451]: time="2026-03-06T01:43:33.126351063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:33.127902 containerd[1451]: time="2026-03-06T01:43:33.127809193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 01:43:33.129581 containerd[1451]: time="2026-03-06T01:43:33.129486561Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:33.133237 containerd[1451]: time="2026-03-06T01:43:33.133139961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:33.134322 containerd[1451]: time="2026-03-06T01:43:33.134217774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.500966256s" Mar 6 01:43:33.134322 containerd[1451]: time="2026-03-06T01:43:33.134290250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 01:43:33.135850 containerd[1451]: time="2026-03-06T01:43:33.135676475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 01:43:33.141743 containerd[1451]: time="2026-03-06T01:43:33.141618870Z" level=info msg="CreateContainer within sandbox \"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 01:43:33.163586 systemd-networkd[1375]: cali428609661cc: Gained IPv6LL Mar 6 01:43:33.168973 containerd[1451]: time="2026-03-06T01:43:33.168810488Z" level=info msg="CreateContainer within sandbox \"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5dd2169c4eb51a70eb5c7d6a53173af0e664ae24976235744ab6cc756fa8a1fe\"" Mar 6 01:43:33.170325 containerd[1451]: time="2026-03-06T01:43:33.170234520Z" level=info msg="StartContainer for \"5dd2169c4eb51a70eb5c7d6a53173af0e664ae24976235744ab6cc756fa8a1fe\"" Mar 6 01:43:33.212982 systemd[1]: Started cri-containerd-5dd2169c4eb51a70eb5c7d6a53173af0e664ae24976235744ab6cc756fa8a1fe.scope - libcontainer container 5dd2169c4eb51a70eb5c7d6a53173af0e664ae24976235744ab6cc756fa8a1fe. 
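The csi image pull above finishes with "bytes read=8792502" and a reported duration of 1.500966256s; dividing the two gives a rough transfer rate. Note the size "10348547" printed alongside is a different quantity, and the log does not spell out exactly what each field measures, so treat the figure as an estimate:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the csi image pull entries above.
	bytesRead := 8792502.0
	elapsed, err := time.ParseDuration("1.500966256s")
	if err != nil {
		panic(err)
	}

	// Rough transfer rate over the reported wall time.
	mbps := bytesRead / elapsed.Seconds() / 1e6
	fmt.Printf("~%.1f MB/s over %s\n", mbps, elapsed) // ~5.9 MB/s over 1.500966256s
}
```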
Mar 6 01:43:33.258006 containerd[1451]: time="2026-03-06T01:43:33.254971381Z" level=info msg="StartContainer for \"5dd2169c4eb51a70eb5c7d6a53173af0e664ae24976235744ab6cc756fa8a1fe\" returns successfully" Mar 6 01:43:33.352250 systemd-networkd[1375]: cali2528dd2650e: Gained IPv6LL Mar 6 01:43:33.352863 systemd-networkd[1375]: cali1753f41fd22: Gained IPv6LL Mar 6 01:43:33.522856 kubelet[2584]: E0306 01:43:33.522482 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:33.541218 kubelet[2584]: E0306 01:43:33.541095 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:33.544271 systemd-networkd[1375]: calid6955b7da70: Gained IPv6LL Mar 6 01:43:33.550886 kubelet[2584]: I0306 01:43:33.550331 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b5xzx" podStartSLOduration=34.550307744 podStartE2EDuration="34.550307744s" podCreationTimestamp="2026-03-06 01:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:43:33.544640009 +0000 UTC m=+42.022119243" watchObservedRunningTime="2026-03-06 01:43:33.550307744 +0000 UTC m=+42.027786937" Mar 6 01:43:33.642520 kubelet[2584]: I0306 01:43:33.642463 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nh5sh" podStartSLOduration=34.642444902 podStartE2EDuration="34.642444902s" podCreationTimestamp="2026-03-06 01:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:43:33.599621911 +0000 UTC m=+42.077101114" watchObservedRunningTime="2026-03-06 01:43:33.642444902 +0000 UTC m=+42.119924094" Mar 6 01:43:33.868948 systemd-networkd[1375]: calic5c189e8586: Gained IPv6LL Mar 6 01:43:33.992268 systemd-networkd[1375]: calie2a5a21e814: Gained IPv6LL Mar 6 01:43:33.996560 systemd-networkd[1375]: cali5b4b2d4c4e6: Gained IPv6LL Mar 6 01:43:34.591984 kubelet[2584]: E0306 01:43:34.591362 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:34.592578 kubelet[2584]: E0306 01:43:34.592139 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:34.986212 containerd[1451]: time="2026-03-06T01:43:34.986114112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:34.987409 containerd[1451]: time="2026-03-06T01:43:34.987319493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 01:43:34.989816 containerd[1451]: time="2026-03-06T01:43:34.989562320Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:35.052214 containerd[1451]: time="2026-03-06T01:43:35.052055015Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:35.053123 containerd[1451]: time="2026-03-06T01:43:35.053004893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.916947039s" Mar 6 01:43:35.053123 containerd[1451]: time="2026-03-06T01:43:35.053084430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 01:43:35.058156 containerd[1451]: time="2026-03-06T01:43:35.057970564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:43:35.130518 containerd[1451]: time="2026-03-06T01:43:35.130482532Z" level=info msg="CreateContainer within sandbox \"8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 01:43:35.155155 containerd[1451]: time="2026-03-06T01:43:35.155037438Z" level=info msg="CreateContainer within sandbox \"8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"23125b6c10978e06b6093fbe75fb78540c3cbdff8079b0f633d5f3394ec1caa1\"" Mar 6 01:43:35.157890 containerd[1451]: time="2026-03-06T01:43:35.156491204Z" level=info msg="StartContainer for \"23125b6c10978e06b6093fbe75fb78540c3cbdff8079b0f633d5f3394ec1caa1\"" Mar 6 01:43:35.223069 systemd[1]: Started cri-containerd-23125b6c10978e06b6093fbe75fb78540c3cbdff8079b0f633d5f3394ec1caa1.scope - libcontainer container 23125b6c10978e06b6093fbe75fb78540c3cbdff8079b0f633d5f3394ec1caa1. 
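The pod_startup_latency_tracker entries above report podStartE2EDuration=34.550307744s for coredns-674b8bbfcf-b5xzx, which is exactly the gap between the printed podCreationTimestamp and watchObservedRunningTime. The timestamps use Go's default time formatting, so the figure is easy to reproduce:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-03-06 01:42:59 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-03-06 01:43:33.550307744 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 34.550307744s, the reported podStartE2EDuration
}
```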
Mar 6 01:43:35.294540 containerd[1451]: time="2026-03-06T01:43:35.294363162Z" level=info msg="StartContainer for \"23125b6c10978e06b6093fbe75fb78540c3cbdff8079b0f633d5f3394ec1caa1\" returns successfully" Mar 6 01:43:35.621518 kubelet[2584]: E0306 01:43:35.621435 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:35.624440 kubelet[2584]: E0306 01:43:35.624315 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:35.652496 kubelet[2584]: I0306 01:43:35.650592 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c6dbb68d8-nnps9" podStartSLOduration=18.671791168 podStartE2EDuration="21.650573108s" podCreationTimestamp="2026-03-06 01:43:14 +0000 UTC" firstStartedPulling="2026-03-06 01:43:32.076349105 +0000 UTC m=+40.553828298" lastFinishedPulling="2026-03-06 01:43:35.055130885 +0000 UTC m=+43.532610238" observedRunningTime="2026-03-06 01:43:35.646597888 +0000 UTC m=+44.124077101" watchObservedRunningTime="2026-03-06 01:43:35.650573108 +0000 UTC m=+44.128052311" Mar 6 01:43:36.563631 kubelet[2584]: I0306 01:43:36.563568 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:36.564487 kubelet[2584]: E0306 01:43:36.564210 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:36.626587 kubelet[2584]: E0306 01:43:36.626382 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:36.809262 containerd[1451]: time="2026-03-06T01:43:36.809173835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:36.810670 containerd[1451]: time="2026-03-06T01:43:36.810572756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 01:43:36.812883 containerd[1451]: time="2026-03-06T01:43:36.812817197Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:36.818260 containerd[1451]: time="2026-03-06T01:43:36.818125398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:36.819591 containerd[1451]: time="2026-03-06T01:43:36.819480530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.761417996s" Mar 6 01:43:36.819591 containerd[1451]: time="2026-03-06T01:43:36.819543677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:43:36.823149 containerd[1451]: time="2026-03-06T01:43:36.822953728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 01:43:36.829205 containerd[1451]: time="2026-03-06T01:43:36.829121075Z" level=info msg="CreateContainer within sandbox \"b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:43:36.850513 containerd[1451]: time="2026-03-06T01:43:36.850328515Z" level=info msg="CreateContainer within sandbox \"b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd5d955f113c95cd502ba58224749e06e9d0785a80de0f6094b5385be4d2dcb7\"" Mar 6 01:43:36.853240 containerd[1451]: time="2026-03-06T01:43:36.853072598Z" level=info msg="StartContainer for \"bd5d955f113c95cd502ba58224749e06e9d0785a80de0f6094b5385be4d2dcb7\"" Mar 6 01:43:36.918112 systemd[1]: Started cri-containerd-bd5d955f113c95cd502ba58224749e06e9d0785a80de0f6094b5385be4d2dcb7.scope - libcontainer container bd5d955f113c95cd502ba58224749e06e9d0785a80de0f6094b5385be4d2dcb7. Mar 6 01:43:37.038242 containerd[1451]: time="2026-03-06T01:43:37.038139525Z" level=info msg="StartContainer for \"bd5d955f113c95cd502ba58224749e06e9d0785a80de0f6094b5385be4d2dcb7\" returns successfully" Mar 6 01:43:37.171874 kernel: calico-node[4957]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 6 01:43:38.094056 systemd-networkd[1375]: vxlan.calico: Link UP Mar 6 01:43:38.094069 systemd-networkd[1375]: vxlan.calico: Gained carrier Mar 6 01:43:38.679335 kubelet[2584]: I0306 01:43:38.679285 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:39.412579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920063334.mount: Deactivated successfully. 
Mar 6 01:43:39.880100 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Mar 6 01:43:39.913129 containerd[1451]: time="2026-03-06T01:43:39.913002963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:39.914161 containerd[1451]: time="2026-03-06T01:43:39.913989347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 6 01:43:39.915857 containerd[1451]: time="2026-03-06T01:43:39.915740645Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:39.929288 containerd[1451]: time="2026-03-06T01:43:39.929183729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:39.930241 containerd[1451]: time="2026-03-06T01:43:39.930149646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.107166372s" Mar 6 01:43:39.930241 containerd[1451]: time="2026-03-06T01:43:39.930200038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 6 01:43:39.939657 containerd[1451]: time="2026-03-06T01:43:39.939628836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:43:39.943889 containerd[1451]: time="2026-03-06T01:43:39.943833305Z" level=info msg="CreateContainer within sandbox \"568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 01:43:39.965176 containerd[1451]: time="2026-03-06T01:43:39.965064753Z" level=info msg="CreateContainer within sandbox \"568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"02ad777cc851adf587a4b75cb3b37201335398507b36b6ff11421790c80d4559\"" Mar 6 01:43:39.965998 containerd[1451]: time="2026-03-06T01:43:39.965907701Z" level=info msg="StartContainer for \"02ad777cc851adf587a4b75cb3b37201335398507b36b6ff11421790c80d4559\"" Mar 6 01:43:40.060941 systemd[1]: Started cri-containerd-02ad777cc851adf587a4b75cb3b37201335398507b36b6ff11421790c80d4559.scope - libcontainer container 02ad777cc851adf587a4b75cb3b37201335398507b36b6ff11421790c80d4559. 
Mar 6 01:43:40.075504 containerd[1451]: time="2026-03-06T01:43:40.075249336Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:40.077001 containerd[1451]: time="2026-03-06T01:43:40.076704056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 6 01:43:40.079618 containerd[1451]: time="2026-03-06T01:43:40.079468078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 139.809176ms" Mar 6 01:43:40.079618 containerd[1451]: time="2026-03-06T01:43:40.079545752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:43:40.082420 containerd[1451]: time="2026-03-06T01:43:40.082384082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 01:43:40.089881 containerd[1451]: time="2026-03-06T01:43:40.089811802Z" level=info msg="CreateContainer within sandbox \"0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:43:40.154247 containerd[1451]: time="2026-03-06T01:43:40.154091448Z" level=info msg="StartContainer for \"02ad777cc851adf587a4b75cb3b37201335398507b36b6ff11421790c80d4559\" returns successfully" Mar 6 01:43:40.167221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630277212.mount: Deactivated successfully. Mar 6 01:43:40.190472 containerd[1451]: time="2026-03-06T01:43:40.190435154Z" level=info msg="CreateContainer within sandbox \"0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"13bfd56cee9e345baa4af49669257d54851d53f2b270732981400fbaeb95bdb1\"" Mar 6 01:43:40.191849 containerd[1451]: time="2026-03-06T01:43:40.191738959Z" level=info msg="StartContainer for \"13bfd56cee9e345baa4af49669257d54851d53f2b270732981400fbaeb95bdb1\"" Mar 6 01:43:40.235048 systemd[1]: Started cri-containerd-13bfd56cee9e345baa4af49669257d54851d53f2b270732981400fbaeb95bdb1.scope - libcontainer container 13bfd56cee9e345baa4af49669257d54851d53f2b270732981400fbaeb95bdb1. 
Mar 6 01:43:40.305551 containerd[1451]: time="2026-03-06T01:43:40.298554429Z" level=info msg="StartContainer for \"13bfd56cee9e345baa4af49669257d54851d53f2b270732981400fbaeb95bdb1\" returns successfully" Mar 6 01:43:40.871479 kubelet[2584]: I0306 01:43:40.871403 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-w9fg5" podStartSLOduration=20.853042393 podStartE2EDuration="27.871384403s" podCreationTimestamp="2026-03-06 01:43:13 +0000 UTC" firstStartedPulling="2026-03-06 01:43:32.919607623 +0000 UTC m=+41.397086816" lastFinishedPulling="2026-03-06 01:43:39.937949633 +0000 UTC m=+48.415428826" observedRunningTime="2026-03-06 01:43:40.870848115 +0000 UTC m=+49.348327318" watchObservedRunningTime="2026-03-06 01:43:40.871384403 +0000 UTC m=+49.348863595" Mar 6 01:43:40.876151 kubelet[2584]: I0306 01:43:40.871581 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8687f94789-jnpd7" podStartSLOduration=23.780962087 podStartE2EDuration="27.871576611s" podCreationTimestamp="2026-03-06 01:43:13 +0000 UTC" firstStartedPulling="2026-03-06 01:43:32.731127822 +0000 UTC m=+41.208607025" lastFinishedPulling="2026-03-06 01:43:36.821742355 +0000 UTC m=+45.299221549" observedRunningTime="2026-03-06 01:43:37.686549822 +0000 UTC m=+46.164029015" watchObservedRunningTime="2026-03-06 01:43:40.871576611 +0000 UTC m=+49.349055804" Mar 6 01:43:41.147218 containerd[1451]: time="2026-03-06T01:43:41.146848778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:41.149344 containerd[1451]: time="2026-03-06T01:43:41.149038014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 01:43:41.150939 containerd[1451]: time="2026-03-06T01:43:41.150858448Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:41.155306 containerd[1451]: time="2026-03-06T01:43:41.155256720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:41.156878 containerd[1451]: time="2026-03-06T01:43:41.156624466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.074190532s" Mar 6 01:43:41.156878 containerd[1451]: time="2026-03-06T01:43:41.156745902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 01:43:41.160739 containerd[1451]: time="2026-03-06T01:43:41.160651637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 01:43:41.165706 containerd[1451]: time="2026-03-06T01:43:41.165608248Z" level=info msg="CreateContainer within sandbox \"d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 01:43:41.197142 containerd[1451]: 
time="2026-03-06T01:43:41.197021923Z" level=info msg="CreateContainer within sandbox \"d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f143bf609e431a5e9c9c1d0e40bf52365107ecdf2123f086bc1d9f111bdfee78\"" Mar 6 01:43:41.197829 containerd[1451]: time="2026-03-06T01:43:41.197738593Z" level=info msg="StartContainer for \"f143bf609e431a5e9c9c1d0e40bf52365107ecdf2123f086bc1d9f111bdfee78\"" Mar 6 01:43:41.240071 systemd[1]: Started cri-containerd-f143bf609e431a5e9c9c1d0e40bf52365107ecdf2123f086bc1d9f111bdfee78.scope - libcontainer container f143bf609e431a5e9c9c1d0e40bf52365107ecdf2123f086bc1d9f111bdfee78. Mar 6 01:43:41.304215 containerd[1451]: time="2026-03-06T01:43:41.304108722Z" level=info msg="StartContainer for \"f143bf609e431a5e9c9c1d0e40bf52365107ecdf2123f086bc1d9f111bdfee78\" returns successfully" Mar 6 01:43:41.341591 kubelet[2584]: I0306 01:43:41.341453 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:41.507858 kubelet[2584]: I0306 01:43:41.507514 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8687f94789-q6vc5" podStartSLOduration=21.418018031 podStartE2EDuration="28.507496783s" podCreationTimestamp="2026-03-06 01:43:13 +0000 UTC" firstStartedPulling="2026-03-06 01:43:32.992144424 +0000 UTC m=+41.469623617" lastFinishedPulling="2026-03-06 01:43:40.081623176 +0000 UTC m=+48.559102369" observedRunningTime="2026-03-06 01:43:40.946050134 +0000 UTC m=+49.423529327" watchObservedRunningTime="2026-03-06 01:43:41.507496783 +0000 UTC m=+49.984975986" Mar 6 01:43:41.707434 kubelet[2584]: I0306 01:43:41.707361 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:42.049886 systemd[1]: run-containerd-runc-k8s.io-f143bf609e431a5e9c9c1d0e40bf52365107ecdf2123f086bc1d9f111bdfee78-runc.jnIYjJ.mount: Deactivated successfully. 
Mar 6 01:43:43.187938 containerd[1451]: time="2026-03-06T01:43:43.187824571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:43.189409 containerd[1451]: time="2026-03-06T01:43:43.189316636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 01:43:43.190904 containerd[1451]: time="2026-03-06T01:43:43.190739063Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:43.194268 containerd[1451]: time="2026-03-06T01:43:43.194184769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:43.195397 containerd[1451]: time="2026-03-06T01:43:43.195321699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.034598719s" Mar 6 01:43:43.195397 containerd[1451]: time="2026-03-06T01:43:43.195387982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 01:43:43.197125 containerd[1451]: time="2026-03-06T01:43:43.197066501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 01:43:43.201891 containerd[1451]: time="2026-03-06T01:43:43.201857098Z" level=info msg="CreateContainer within sandbox \"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 01:43:43.241889 containerd[1451]: time="2026-03-06T01:43:43.241726732Z" level=info msg="CreateContainer within sandbox \"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a763dd68499ea0093cfc3e47d99cf9dba46b72b9b1cc182ed7bf92f32d387e5a\"" Mar 6 01:43:43.248629 containerd[1451]: time="2026-03-06T01:43:43.248508569Z" level=info msg="StartContainer for \"a763dd68499ea0093cfc3e47d99cf9dba46b72b9b1cc182ed7bf92f32d387e5a\"" Mar 6 01:43:43.300060 systemd[1]: Started cri-containerd-a763dd68499ea0093cfc3e47d99cf9dba46b72b9b1cc182ed7bf92f32d387e5a.scope - libcontainer container a763dd68499ea0093cfc3e47d99cf9dba46b72b9b1cc182ed7bf92f32d387e5a. 
Mar 6 01:43:43.349479 containerd[1451]: time="2026-03-06T01:43:43.349312936Z" level=info msg="StartContainer for \"a763dd68499ea0093cfc3e47d99cf9dba46b72b9b1cc182ed7bf92f32d387e5a\" returns successfully" Mar 6 01:43:43.733627 kubelet[2584]: I0306 01:43:43.730020 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-df657" podStartSLOduration=18.166085764 podStartE2EDuration="29.730005276s" podCreationTimestamp="2026-03-06 01:43:14 +0000 UTC" firstStartedPulling="2026-03-06 01:43:31.632486505 +0000 UTC m=+40.109965698" lastFinishedPulling="2026-03-06 01:43:43.196406017 +0000 UTC m=+51.673885210" observedRunningTime="2026-03-06 01:43:43.728569194 +0000 UTC m=+52.206048388" watchObservedRunningTime="2026-03-06 01:43:43.730005276 +0000 UTC m=+52.207484469" Mar 6 01:43:43.944558 kubelet[2584]: I0306 01:43:43.944456 2584 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 01:43:43.946201 kubelet[2584]: I0306 01:43:43.946086 2584 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 01:43:44.554067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647007245.mount: Deactivated successfully. Mar 6 01:43:44.636823 containerd[1451]: time="2026-03-06T01:43:44.635060432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:44.637624 containerd[1451]: time="2026-03-06T01:43:44.637504561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 01:43:44.639104 containerd[1451]: time="2026-03-06T01:43:44.639032885Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:44.641979 containerd[1451]: time="2026-03-06T01:43:44.641915739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:44.643270 containerd[1451]: time="2026-03-06T01:43:44.643200770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.446065071s" Mar 6 01:43:44.643270 containerd[1451]: time="2026-03-06T01:43:44.643265731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 01:43:44.650419 containerd[1451]: time="2026-03-06T01:43:44.650159138Z" level=info msg="CreateContainer within sandbox \"d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 01:43:44.672393 containerd[1451]: time="2026-03-06T01:43:44.672204739Z" level=info msg="CreateContainer within sandbox \"d00f706613ff4dc4b2e28ba45700c0f695e9e44ecf8270493669a6f49b59439c\" for 
&ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"229576c6234aa65addb1dbe79080e925f6de0be4642437bb821bde369dc415be\"" Mar 6 01:43:44.676022 containerd[1451]: time="2026-03-06T01:43:44.675862145Z" level=info msg="StartContainer for \"229576c6234aa65addb1dbe79080e925f6de0be4642437bb821bde369dc415be\"" Mar 6 01:43:44.743025 systemd[1]: Started cri-containerd-229576c6234aa65addb1dbe79080e925f6de0be4642437bb821bde369dc415be.scope - libcontainer container 229576c6234aa65addb1dbe79080e925f6de0be4642437bb821bde369dc415be. Mar 6 01:43:44.841870 containerd[1451]: time="2026-03-06T01:43:44.840825430Z" level=info msg="StartContainer for \"229576c6234aa65addb1dbe79080e925f6de0be4642437bb821bde369dc415be\" returns successfully" Mar 6 01:43:45.771330 kubelet[2584]: I0306 01:43:45.771095 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f7cb5c45f-dpzcb" podStartSLOduration=3.2360276040000002 podStartE2EDuration="14.771068195s" podCreationTimestamp="2026-03-06 01:43:31 +0000 UTC" firstStartedPulling="2026-03-06 01:43:33.109604677 +0000 UTC m=+41.587083870" lastFinishedPulling="2026-03-06 01:43:44.644645267 +0000 UTC m=+53.122124461" observedRunningTime="2026-03-06 01:43:45.761641964 +0000 UTC m=+54.239121158" watchObservedRunningTime="2026-03-06 01:43:45.771068195 +0000 UTC m=+54.248547389" Mar 6 01:43:51.649361 containerd[1451]: time="2026-03-06T01:43:51.647005239Z" level=info msg="StopPodSandbox for \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\"" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.209 [WARNING][5533] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--df657-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977c9795-dcad-4a6a-8717-7b63d6db97ee", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5", Pod:"csi-node-driver-df657", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528dd2650e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.214 [INFO][5533] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.214 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" iface="eth0" netns="" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.217 [INFO][5533] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.219 [INFO][5533] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.458 [INFO][5541] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.458 [INFO][5541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.458 [INFO][5541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.494 [WARNING][5541] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.494 [INFO][5541] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.506 [INFO][5541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:52.531098 containerd[1451]: 2026-03-06 01:43:52.519 [INFO][5533] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:52.566447 containerd[1451]: time="2026-03-06T01:43:52.566129332Z" level=info msg="TearDown network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\" successfully" Mar 6 01:43:52.566447 containerd[1451]: time="2026-03-06T01:43:52.566301193Z" level=info msg="StopPodSandbox for \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\" returns successfully" Mar 6 01:43:52.641203 containerd[1451]: time="2026-03-06T01:43:52.639338350Z" level=info msg="RemovePodSandbox for \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\"" Mar 6 01:43:52.641203 containerd[1451]: time="2026-03-06T01:43:52.640031530Z" level=info msg="Forcibly stopping sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\"" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.795 [WARNING][5559] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--df657-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977c9795-dcad-4a6a-8717-7b63d6db97ee", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf4de95f6d51ab6f2fe8c9eff8604ec9968c41f9677369d72fabf16a4ff3d7d5", Pod:"csi-node-driver-df657", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528dd2650e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.798 [INFO][5559] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.798 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" iface="eth0" netns="" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.798 [INFO][5559] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.798 [INFO][5559] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.909 [INFO][5567] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.910 [INFO][5567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.910 [INFO][5567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.950 [WARNING][5567] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.950 [INFO][5567] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" HandleID="k8s-pod-network.e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Workload="localhost-k8s-csi--node--driver--df657-eth0" Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.982 [INFO][5567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:53.015490 containerd[1451]: 2026-03-06 01:43:52.998 [INFO][5559] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f" Mar 6 01:43:53.035968 containerd[1451]: time="2026-03-06T01:43:53.015535834Z" level=info msg="TearDown network for sandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\" successfully" Mar 6 01:43:53.068874 containerd[1451]: time="2026-03-06T01:43:53.068186529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:53.068874 containerd[1451]: time="2026-03-06T01:43:53.068331569Z" level=info msg="RemovePodSandbox \"e9767d0e7429a764483e367560b10b7af75911f36ef8bce35cc3f53d1f45a35f\" returns successfully" Mar 6 01:43:53.087192 containerd[1451]: time="2026-03-06T01:43:53.086565420Z" level=info msg="StopPodSandbox for \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\"" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.243 [WARNING][5583] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c745590-fa59-4b3b-8745-5a7c8ee1d2b2", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3", Pod:"coredns-674b8bbfcf-b5xzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6955b7da70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.247 [INFO][5583] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.247 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" iface="eth0" netns="" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.247 [INFO][5583] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.247 [INFO][5583] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.313 [INFO][5592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.313 [INFO][5592] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.313 [INFO][5592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.328 [WARNING][5592] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.328 [INFO][5592] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.331 [INFO][5592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:53.339013 containerd[1451]: 2026-03-06 01:43:53.334 [INFO][5583] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.339013 containerd[1451]: time="2026-03-06T01:43:53.338916754Z" level=info msg="TearDown network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\" successfully" Mar 6 01:43:53.339013 containerd[1451]: time="2026-03-06T01:43:53.338942271Z" level=info msg="StopPodSandbox for \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\" returns successfully" Mar 6 01:43:53.340787 containerd[1451]: time="2026-03-06T01:43:53.340181561Z" level=info msg="RemovePodSandbox for \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\"" Mar 6 01:43:53.340787 containerd[1451]: time="2026-03-06T01:43:53.340262703Z" level=info msg="Forcibly stopping sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\"" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.410 [WARNING][5609] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c745590-fa59-4b3b-8745-5a7c8ee1d2b2", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0c219f5d63cece2d12a123ac2342251bac558681054fedc588b19f3174caff3", Pod:"coredns-674b8bbfcf-b5xzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6955b7da70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.411 [INFO][5609] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.411 [INFO][5609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" iface="eth0" netns="" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.411 [INFO][5609] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.411 [INFO][5609] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.453 [INFO][5617] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.453 [INFO][5617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.453 [INFO][5617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.462 [WARNING][5617] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.462 [INFO][5617] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" HandleID="k8s-pod-network.df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Workload="localhost-k8s-coredns--674b8bbfcf--b5xzx-eth0" Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.466 [INFO][5617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:53.475263 containerd[1451]: 2026-03-06 01:43:53.471 [INFO][5609] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40" Mar 6 01:43:53.476070 containerd[1451]: time="2026-03-06T01:43:53.475273919Z" level=info msg="TearDown network for sandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\" successfully" Mar 6 01:43:53.481687 containerd[1451]: time="2026-03-06T01:43:53.481369141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:53.481687 containerd[1451]: time="2026-03-06T01:43:53.481456664Z" level=info msg="RemovePodSandbox \"df1e34f93cc775d1e898588e5889d20f36a09fc7b6a1c64d7422340f5dc23e40\" returns successfully" Mar 6 01:43:53.482275 containerd[1451]: time="2026-03-06T01:43:53.482171889Z" level=info msg="StopPodSandbox for \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\"" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.537 [WARNING][5636] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"00782c1b-bef0-48dd-8d89-f3e72a842b74", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd", Pod:"calico-apiserver-8687f94789-jnpd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali428609661cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.538 [INFO][5636] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.538 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" iface="eth0" netns="" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.538 [INFO][5636] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.538 [INFO][5636] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.579 [INFO][5644] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.579 [INFO][5644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.579 [INFO][5644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.634 [WARNING][5644] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.634 [INFO][5644] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.637 [INFO][5644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:53.643128 containerd[1451]: 2026-03-06 01:43:53.640 [INFO][5636] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.644030 containerd[1451]: time="2026-03-06T01:43:53.643904293Z" level=info msg="TearDown network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\" successfully" Mar 6 01:43:53.644030 containerd[1451]: time="2026-03-06T01:43:53.643967511Z" level=info msg="StopPodSandbox for \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\" returns successfully" Mar 6 01:43:53.645048 containerd[1451]: time="2026-03-06T01:43:53.644997617Z" level=info msg="RemovePodSandbox for \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\"" Mar 6 01:43:53.645165 containerd[1451]: time="2026-03-06T01:43:53.645065763Z" level=info msg="Forcibly stopping sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\"" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.714 [WARNING][5662] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"00782c1b-bef0-48dd-8d89-f3e72a842b74", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0d7929604ebefb10b6f90cd05601863d9129e148a67e5199ad2b16eed9f6bfd", Pod:"calico-apiserver-8687f94789-jnpd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali428609661cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.714 [INFO][5662] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.714 [INFO][5662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" iface="eth0" netns="" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.714 [INFO][5662] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.714 [INFO][5662] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.751 [INFO][5671] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.751 [INFO][5671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.751 [INFO][5671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.783 [WARNING][5671] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.783 [INFO][5671] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" HandleID="k8s-pod-network.db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Workload="localhost-k8s-calico--apiserver--8687f94789--jnpd7-eth0" Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.786 [INFO][5671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:53.793726 containerd[1451]: 2026-03-06 01:43:53.789 [INFO][5662] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440" Mar 6 01:43:53.794503 containerd[1451]: time="2026-03-06T01:43:53.793817193Z" level=info msg="TearDown network for sandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\" successfully" Mar 6 01:43:53.800208 containerd[1451]: time="2026-03-06T01:43:53.799948650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:53.800208 containerd[1451]: time="2026-03-06T01:43:53.800037665Z" level=info msg="RemovePodSandbox \"db1697975c40d2d15809da6c4fc800d1b3b5589117e344ace80e3f6f30acf440\" returns successfully" Mar 6 01:43:53.800890 containerd[1451]: time="2026-03-06T01:43:53.800831561Z" level=info msg="StopPodSandbox for \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\"" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.860 [WARNING][5688] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" WorkloadEndpoint="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.860 [INFO][5688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.860 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" iface="eth0" netns="" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.861 [INFO][5688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.861 [INFO][5688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.893 [INFO][5697] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.894 [INFO][5697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.894 [INFO][5697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.900 [WARNING][5697] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.900 [INFO][5697] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.902 [INFO][5697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:53.908170 containerd[1451]: 2026-03-06 01:43:53.905 [INFO][5688] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:53.908170 containerd[1451]: time="2026-03-06T01:43:53.908090458Z" level=info msg="TearDown network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\" successfully" Mar 6 01:43:53.908170 containerd[1451]: time="2026-03-06T01:43:53.908113892Z" level=info msg="StopPodSandbox for \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\" returns successfully" Mar 6 01:43:53.909011 containerd[1451]: time="2026-03-06T01:43:53.908708171Z" level=info msg="RemovePodSandbox for \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\"" Mar 6 01:43:53.909011 containerd[1451]: time="2026-03-06T01:43:53.908966551Z" level=info msg="Forcibly stopping sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\"" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.960 [WARNING][5714] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" WorkloadEndpoint="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.961 [INFO][5714] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.961 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" iface="eth0" netns="" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.961 [INFO][5714] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.961 [INFO][5714] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.996 [INFO][5722] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.996 [INFO][5722] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:53.996 [INFO][5722] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:54.005 [WARNING][5722] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:54.006 [INFO][5722] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" HandleID="k8s-pod-network.1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Workload="localhost-k8s-whisker--5585599fd--r7kcl-eth0" Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:54.008 [INFO][5722] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:54.015112 containerd[1451]: 2026-03-06 01:43:54.011 [INFO][5714] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12" Mar 6 01:43:54.016025 containerd[1451]: time="2026-03-06T01:43:54.015109836Z" level=info msg="TearDown network for sandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\" successfully" Mar 6 01:43:54.021346 containerd[1451]: time="2026-03-06T01:43:54.021283743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:54.021524 containerd[1451]: time="2026-03-06T01:43:54.021390732Z" level=info msg="RemovePodSandbox \"1eb20569878e78f040b5f1e3cfd248a2f8a61565c8d7db589905cbdf6555ff12\" returns successfully" Mar 6 01:43:54.022621 containerd[1451]: time="2026-03-06T01:43:54.022563155Z" level=info msg="StopPodSandbox for \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\"" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.083 [WARNING][5740] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0", GenerateName:"calico-kube-controllers-6c6dbb68d8-", Namespace:"calico-system", SelfLink:"", UID:"29925f97-ffaa-4463-8aba-6f0558d0f689", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6dbb68d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43", Pod:"calico-kube-controllers-6c6dbb68d8-nnps9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1753f41fd22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.084 [INFO][5740] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.084 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" iface="eth0" netns="" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.084 [INFO][5740] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.084 [INFO][5740] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.126 [INFO][5748] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.126 [INFO][5748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.126 [INFO][5748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.136 [WARNING][5748] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.136 [INFO][5748] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.139 [INFO][5748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:54.146315 containerd[1451]: 2026-03-06 01:43:54.143 [INFO][5740] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.146315 containerd[1451]: time="2026-03-06T01:43:54.146270852Z" level=info msg="TearDown network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\" successfully" Mar 6 01:43:54.146315 containerd[1451]: time="2026-03-06T01:43:54.146293224Z" level=info msg="StopPodSandbox for \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\" returns successfully" Mar 6 01:43:54.147101 containerd[1451]: time="2026-03-06T01:43:54.147059449Z" level=info msg="RemovePodSandbox for \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\"" Mar 6 01:43:54.147128 containerd[1451]: time="2026-03-06T01:43:54.147096547Z" level=info msg="Forcibly stopping sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\"" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.204 [WARNING][5766] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0", GenerateName:"calico-kube-controllers-6c6dbb68d8-", Namespace:"calico-system", SelfLink:"", UID:"29925f97-ffaa-4463-8aba-6f0558d0f689", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c6dbb68d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8665f1555bbc1283dc6407c0ffb6c7312280a913b60606a121cc415d50ac8d43", Pod:"calico-kube-controllers-6c6dbb68d8-nnps9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1753f41fd22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.205 [INFO][5766] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.205 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" iface="eth0" netns="" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.205 [INFO][5766] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.205 [INFO][5766] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.244 [INFO][5774] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.244 [INFO][5774] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.244 [INFO][5774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.252 [WARNING][5774] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.252 [INFO][5774] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" HandleID="k8s-pod-network.751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Workload="localhost-k8s-calico--kube--controllers--6c6dbb68d8--nnps9-eth0" Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.255 [INFO][5774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:54.263587 containerd[1451]: 2026-03-06 01:43:54.259 [INFO][5766] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e" Mar 6 01:43:54.263587 containerd[1451]: time="2026-03-06T01:43:54.263477179Z" level=info msg="TearDown network for sandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\" successfully" Mar 6 01:43:54.275914 containerd[1451]: time="2026-03-06T01:43:54.275835487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:54.276024 containerd[1451]: time="2026-03-06T01:43:54.275934050Z" level=info msg="RemovePodSandbox \"751fcd8d04b777abe43f8c7548e08538a12a4b38f67c851ea1f28fefb40d861e\" returns successfully" Mar 6 01:43:54.276939 containerd[1451]: time="2026-03-06T01:43:54.276875032Z" level=info msg="StopPodSandbox for \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\"" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.355 [WARNING][5791] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--w9fg5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9d73777b-f4c0-4c9b-90d2-bd41b4633f25", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f", Pod:"goldmane-5b85766d88-w9fg5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5c189e8586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.357 [INFO][5791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.357 [INFO][5791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" iface="eth0" netns="" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.357 [INFO][5791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.357 [INFO][5791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.708 [INFO][5799] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.708 [INFO][5799] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.708 [INFO][5799] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.719 [WARNING][5799] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.719 [INFO][5799] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.722 [INFO][5799] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:54.729448 containerd[1451]: 2026-03-06 01:43:54.724 [INFO][5791] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.729448 containerd[1451]: time="2026-03-06T01:43:54.729372624Z" level=info msg="TearDown network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\" successfully" Mar 6 01:43:54.729448 containerd[1451]: time="2026-03-06T01:43:54.729424309Z" level=info msg="StopPodSandbox for \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\" returns successfully" Mar 6 01:43:54.731966 containerd[1451]: time="2026-03-06T01:43:54.730936425Z" level=info msg="RemovePodSandbox for \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\"" Mar 6 01:43:54.731966 containerd[1451]: time="2026-03-06T01:43:54.730973545Z" level=info msg="Forcibly stopping sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\"" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.799 [WARNING][5817] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--w9fg5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9d73777b-f4c0-4c9b-90d2-bd41b4633f25", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"568557b72189d53c4c618f5b9c1e1749090f4849c5867564417938800c37927f", Pod:"goldmane-5b85766d88-w9fg5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5c189e8586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.800 [INFO][5817] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.800 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" iface="eth0" netns="" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.800 [INFO][5817] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.800 [INFO][5817] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.847 [INFO][5826] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.847 [INFO][5826] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.847 [INFO][5826] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.854 [WARNING][5826] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.854 [INFO][5826] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" HandleID="k8s-pod-network.956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Workload="localhost-k8s-goldmane--5b85766d88--w9fg5-eth0" Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.856 [INFO][5826] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:54.872745 containerd[1451]: 2026-03-06 01:43:54.860 [INFO][5817] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2" Mar 6 01:43:54.872745 containerd[1451]: time="2026-03-06T01:43:54.871998334Z" level=info msg="TearDown network for sandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\" successfully" Mar 6 01:43:54.878258 containerd[1451]: time="2026-03-06T01:43:54.878157233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:54.878258 containerd[1451]: time="2026-03-06T01:43:54.878263120Z" level=info msg="RemovePodSandbox \"956c11d2494243f2d676adf8e8c1799d9b3abc233701fb0eba8c6395ff4712d2\" returns successfully" Mar 6 01:43:54.879207 containerd[1451]: time="2026-03-06T01:43:54.879078238Z" level=info msg="StopPodSandbox for \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\"" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:54.941 [WARNING][5843] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dff04dd3-ef84-4619-a71a-c275e3897a95", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893", Pod:"coredns-674b8bbfcf-nh5sh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e80e7f6bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:54.941 [INFO][5843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:54.941 [INFO][5843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" iface="eth0" netns="" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:54.941 [INFO][5843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:54.941 [INFO][5843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.018 [INFO][5852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.019 [INFO][5852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.019 [INFO][5852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.033 [WARNING][5852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.033 [INFO][5852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.036 [INFO][5852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:55.044391 containerd[1451]: 2026-03-06 01:43:55.040 [INFO][5843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.046858 containerd[1451]: time="2026-03-06T01:43:55.046560648Z" level=info msg="TearDown network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\" successfully" Mar 6 01:43:55.046858 containerd[1451]: time="2026-03-06T01:43:55.046601644Z" level=info msg="StopPodSandbox for \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\" returns successfully" Mar 6 01:43:55.047709 containerd[1451]: time="2026-03-06T01:43:55.047596471Z" level=info msg="RemovePodSandbox for \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\"" Mar 6 01:43:55.047709 containerd[1451]: time="2026-03-06T01:43:55.047689373Z" level=info msg="Forcibly stopping sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\"" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.116 [WARNING][5869] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dff04dd3-ef84-4619-a71a-c275e3897a95", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fde27a018083816b860ad4d6ec172fa0b304bbdf6ae47bf3adf1a64152d35893", Pod:"coredns-674b8bbfcf-nh5sh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e80e7f6bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.117 [INFO][5869] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.117 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" iface="eth0" netns="" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.117 [INFO][5869] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.117 [INFO][5869] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.157 [INFO][5879] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.157 [INFO][5879] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.157 [INFO][5879] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.167 [WARNING][5879] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.168 [INFO][5879] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" HandleID="k8s-pod-network.72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Workload="localhost-k8s-coredns--674b8bbfcf--nh5sh-eth0" Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.173 [INFO][5879] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:55.181151 containerd[1451]: 2026-03-06 01:43:55.176 [INFO][5869] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036" Mar 6 01:43:55.181599 containerd[1451]: time="2026-03-06T01:43:55.181204285Z" level=info msg="TearDown network for sandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\" successfully" Mar 6 01:43:55.187495 containerd[1451]: time="2026-03-06T01:43:55.187387503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:55.187495 containerd[1451]: time="2026-03-06T01:43:55.187475176Z" level=info msg="RemovePodSandbox \"72500393f7d778c162e9e1877bce2fb878a58d469c89e307d7b917302e759036\" returns successfully" Mar 6 01:43:55.188358 containerd[1451]: time="2026-03-06T01:43:55.188298938Z" level=info msg="StopPodSandbox for \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\"" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.249 [WARNING][5896] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"a165b4e4-ca12-4318-93a2-9f1d976fbb5d", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665", Pod:"calico-apiserver-8687f94789-q6vc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2a5a21e814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.249 [INFO][5896] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.249 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" iface="eth0" netns="" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.249 [INFO][5896] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.249 [INFO][5896] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.294 [INFO][5904] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.294 [INFO][5904] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.294 [INFO][5904] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.302 [WARNING][5904] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.302 [INFO][5904] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.305 [INFO][5904] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:55.310875 containerd[1451]: 2026-03-06 01:43:55.307 [INFO][5896] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.310875 containerd[1451]: time="2026-03-06T01:43:55.310463304Z" level=info msg="TearDown network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\" successfully" Mar 6 01:43:55.310875 containerd[1451]: time="2026-03-06T01:43:55.310489542Z" level=info msg="StopPodSandbox for \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\" returns successfully" Mar 6 01:43:55.312312 containerd[1451]: time="2026-03-06T01:43:55.311382284Z" level=info msg="RemovePodSandbox for \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\"" Mar 6 01:43:55.312312 containerd[1451]: time="2026-03-06T01:43:55.311418662Z" level=info msg="Forcibly stopping sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\"" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.369 [WARNING][5921] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0", GenerateName:"calico-apiserver-8687f94789-", Namespace:"calico-system", SelfLink:"", UID:"a165b4e4-ca12-4318-93a2-9f1d976fbb5d", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8687f94789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b955d2625c6781d4b4b7ed6052d11e504fd1bf95ea852dbe54923d55973b665", Pod:"calico-apiserver-8687f94789-q6vc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2a5a21e814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.369 [INFO][5921] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.369 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" iface="eth0" netns="" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.369 [INFO][5921] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.369 [INFO][5921] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.403 [INFO][5929] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.403 [INFO][5929] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.403 [INFO][5929] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.412 [WARNING][5929] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.412 [INFO][5929] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" HandleID="k8s-pod-network.aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Workload="localhost-k8s-calico--apiserver--8687f94789--q6vc5-eth0" Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.415 [INFO][5929] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:55.422603 containerd[1451]: 2026-03-06 01:43:55.419 [INFO][5921] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5" Mar 6 01:43:55.422603 containerd[1451]: time="2026-03-06T01:43:55.422561397Z" level=info msg="TearDown network for sandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\" successfully" Mar 6 01:43:55.433029 containerd[1451]: time="2026-03-06T01:43:55.432936193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:43:55.433148 containerd[1451]: time="2026-03-06T01:43:55.433037060Z" level=info msg="RemovePodSandbox \"aaab69f2518e68548491b2f1f23335f256939ed7b061815d84359e6f6518abb5\" returns successfully" Mar 6 01:43:56.607948 kubelet[2584]: I0306 01:43:56.607319 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:44:01.650589 kubelet[2584]: E0306 01:44:01.650347 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:03.294166 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:57390.service - OpenSSH per-connection server daemon (10.0.0.1:57390). Mar 6 01:44:03.373665 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 57390 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:03.376991 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:03.382880 systemd-logind[1439]: New session 10 of user core. Mar 6 01:44:03.393180 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 01:44:04.036905 sshd[5967]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:04.041366 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:57390.service: Deactivated successfully. Mar 6 01:44:04.044005 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:44:04.046846 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:44:04.049541 systemd-logind[1439]: Removed session 10. Mar 6 01:44:08.648593 kubelet[2584]: E0306 01:44:08.648378 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:09.064367 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:57400.service - OpenSSH per-connection server daemon (10.0.0.1:57400). 
Mar 6 01:44:09.116302 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 57400 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:09.118204 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:09.125083 systemd-logind[1439]: New session 11 of user core. Mar 6 01:44:09.139224 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:44:09.361350 sshd[6008]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:09.371262 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:57400.service: Deactivated successfully. Mar 6 01:44:09.374090 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:44:09.375262 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:44:09.377105 systemd-logind[1439]: Removed session 11. Mar 6 01:44:13.648726 kubelet[2584]: E0306 01:44:13.648688 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:14.385271 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:47996.service - OpenSSH per-connection server daemon (10.0.0.1:47996). Mar 6 01:44:14.424461 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 47996 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:14.426729 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:14.432253 systemd-logind[1439]: New session 12 of user core. Mar 6 01:44:14.440130 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 01:44:14.610029 sshd[6069]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:14.615552 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:47996.service: Deactivated successfully. Mar 6 01:44:14.617855 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 01:44:14.619115 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Mar 6 01:44:14.620863 systemd-logind[1439]: Removed session 12. Mar 6 01:44:15.405868 kubelet[2584]: I0306 01:44:15.405210 2584 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:44:17.649851 kubelet[2584]: E0306 01:44:17.648541 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:19.621971 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:48002.service - OpenSSH per-connection server daemon (10.0.0.1:48002). Mar 6 01:44:19.711413 sshd[6098]: Accepted publickey for core from 10.0.0.1 port 48002 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:19.713891 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:19.721949 systemd-logind[1439]: New session 13 of user core. Mar 6 01:44:19.730011 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 01:44:19.928676 sshd[6098]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:19.935429 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:48002.service: Deactivated successfully. Mar 6 01:44:19.938691 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:44:19.941602 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:44:19.943962 systemd-logind[1439]: Removed session 13. 
Mar 6 01:44:24.941985 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:43930.service - OpenSSH per-connection server daemon (10.0.0.1:43930). Mar 6 01:44:24.993026 sshd[6134]: Accepted publickey for core from 10.0.0.1 port 43930 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:24.995996 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:25.004261 systemd-logind[1439]: New session 14 of user core. Mar 6 01:44:25.013093 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:44:25.187006 sshd[6134]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:25.191286 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:43930.service: Deactivated successfully. Mar 6 01:44:25.194105 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 01:44:25.196264 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Mar 6 01:44:25.198497 systemd-logind[1439]: Removed session 14. Mar 6 01:44:30.220330 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:39726.service - OpenSSH per-connection server daemon (10.0.0.1:39726). Mar 6 01:44:30.297857 sshd[6169]: Accepted publickey for core from 10.0.0.1 port 39726 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:30.300469 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:30.307707 systemd-logind[1439]: New session 15 of user core. Mar 6 01:44:30.329162 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 01:44:30.499106 sshd[6169]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:30.511479 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:39726.service: Deactivated successfully. Mar 6 01:44:30.513868 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 01:44:30.515551 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Mar 6 01:44:30.523141 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:39732.service - OpenSSH per-connection server daemon (10.0.0.1:39732). Mar 6 01:44:30.524888 systemd-logind[1439]: Removed session 15. Mar 6 01:44:30.560439 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 39732 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:30.562303 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:30.568123 systemd-logind[1439]: New session 16 of user core. Mar 6 01:44:30.582989 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 01:44:30.776305 sshd[6186]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:30.787859 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:39732.service: Deactivated successfully. Mar 6 01:44:30.790042 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 01:44:30.794550 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Mar 6 01:44:30.802368 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:39736.service - OpenSSH per-connection server daemon (10.0.0.1:39736). Mar 6 01:44:30.805080 systemd-logind[1439]: Removed session 16. Mar 6 01:44:30.838012 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 39736 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:44:30.840369 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:30.847483 systemd-logind[1439]: New session 17 of user core. Mar 6 01:44:30.856147 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 6 01:44:30.994311 sshd[6199]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:30.998507 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:39736.service: Deactivated successfully.
Mar 6 01:44:31.001045 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 01:44:31.002398 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Mar 6 01:44:31.004564 systemd-logind[1439]: Removed session 17.
Mar 6 01:44:36.017087 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:39738.service - OpenSSH per-connection server daemon (10.0.0.1:39738).
Mar 6 01:44:36.063891 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 39738 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:36.066156 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:36.074378 systemd-logind[1439]: New session 18 of user core.
Mar 6 01:44:36.095247 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 01:44:36.277533 sshd[6235]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:36.284464 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:39738.service: Deactivated successfully.
Mar 6 01:44:36.288625 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 01:44:36.290537 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Mar 6 01:44:36.293909 systemd-logind[1439]: Removed session 18.
Mar 6 01:44:36.671258 kubelet[2584]: E0306 01:44:36.671074 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:41.312254 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:40954.service - OpenSSH per-connection server daemon (10.0.0.1:40954).
Mar 6 01:44:41.351638 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 40954 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:41.354034 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:41.362454 systemd-logind[1439]: New session 19 of user core.
Mar 6 01:44:41.367111 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 01:44:41.522288 systemd[1]: run-containerd-runc-k8s.io-fe8dbc9108d7d640b9f83f66cbe60f112eab6c947e99222d7b2b3ff45769bb2f-runc.Zd9mtc.mount: Deactivated successfully.
Mar 6 01:44:41.654732 sshd[6249]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:41.668006 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:40954.service: Deactivated successfully.
Mar 6 01:44:41.686437 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 01:44:41.688513 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Mar 6 01:44:41.698411 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:40970.service - OpenSSH per-connection server daemon (10.0.0.1:40970).
Mar 6 01:44:41.700715 systemd-logind[1439]: Removed session 19.
Mar 6 01:44:41.769850 sshd[6285]: Accepted publickey for core from 10.0.0.1 port 40970 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:41.792126 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:41.801559 systemd-logind[1439]: New session 20 of user core.
Mar 6 01:44:41.807083 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 01:44:42.350310 sshd[6285]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:42.358347 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:40970.service: Deactivated successfully.
Mar 6 01:44:42.361110 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 01:44:42.367298 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Mar 6 01:44:42.379223 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:40974.service - OpenSSH per-connection server daemon (10.0.0.1:40974).
Mar 6 01:44:42.380645 systemd-logind[1439]: Removed session 20.
Mar 6 01:44:42.447543 sshd[6320]: Accepted publickey for core from 10.0.0.1 port 40974 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:42.451030 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:42.457822 systemd-logind[1439]: New session 21 of user core.
Mar 6 01:44:42.476171 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 01:44:42.648715 kubelet[2584]: E0306 01:44:42.648524 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:43.324427 sshd[6320]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:43.338149 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:40974.service: Deactivated successfully.
Mar 6 01:44:43.341217 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 01:44:43.346457 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Mar 6 01:44:43.354970 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:40990.service - OpenSSH per-connection server daemon (10.0.0.1:40990).
Mar 6 01:44:43.358480 systemd-logind[1439]: Removed session 21.
Mar 6 01:44:43.410541 sshd[6368]: Accepted publickey for core from 10.0.0.1 port 40990 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:43.413724 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:43.428342 systemd-logind[1439]: New session 22 of user core.
Mar 6 01:44:43.438893 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 01:44:43.876054 sshd[6368]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:43.885015 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:40990.service: Deactivated successfully.
Mar 6 01:44:43.888606 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 01:44:43.891901 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Mar 6 01:44:43.907522 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998).
Mar 6 01:44:43.909204 systemd-logind[1439]: Removed session 22.
Mar 6 01:44:43.951705 sshd[6380]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:43.954323 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:43.965292 systemd-logind[1439]: New session 23 of user core.
Mar 6 01:44:43.975398 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 01:44:44.175447 sshd[6380]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:44.182996 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:40998.service: Deactivated successfully.
Mar 6 01:44:44.185864 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 01:44:44.187937 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
Mar 6 01:44:44.191504 systemd-logind[1439]: Removed session 23.
Mar 6 01:44:49.190462 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:41004.service - OpenSSH per-connection server daemon (10.0.0.1:41004).
Mar 6 01:44:49.238832 sshd[6394]: Accepted publickey for core from 10.0.0.1 port 41004 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:49.242014 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:49.251561 systemd-logind[1439]: New session 24 of user core.
Mar 6 01:44:49.259147 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 01:44:49.414043 sshd[6394]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:49.420023 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:41004.service: Deactivated successfully.
Mar 6 01:44:49.422560 systemd[1]: session-24.scope: Deactivated successfully.
Mar 6 01:44:49.424587 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit.
Mar 6 01:44:49.426916 systemd-logind[1439]: Removed session 24.
Mar 6 01:44:54.439214 systemd[1]: Started sshd@24-10.0.0.94:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440).
Mar 6 01:44:54.488948 sshd[6412]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:54.491501 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:54.498783 systemd-logind[1439]: New session 25 of user core.
Mar 6 01:44:54.509032 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 6 01:44:54.651265 sshd[6412]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:54.655884 systemd[1]: sshd@24-10.0.0.94:22-10.0.0.1:42440.service: Deactivated successfully.
Mar 6 01:44:54.658199 systemd[1]: session-25.scope: Deactivated successfully.
Mar 6 01:44:54.659175 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit.
Mar 6 01:44:54.660661 systemd-logind[1439]: Removed session 25.
Mar 6 01:44:55.648431 kubelet[2584]: E0306 01:44:55.648358 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:59.700384 systemd[1]: Started sshd@25-10.0.0.94:22-10.0.0.1:42452.service - OpenSSH per-connection server daemon (10.0.0.1:42452).
Mar 6 01:44:59.806168 sshd[6438]: Accepted publickey for core from 10.0.0.1 port 42452 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:59.809056 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:59.822354 systemd-logind[1439]: New session 26 of user core.
Mar 6 01:44:59.829829 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 6 01:45:00.123098 sshd[6438]: pam_unix(sshd:session): session closed for user core
Mar 6 01:45:00.132394 systemd[1]: sshd@25-10.0.0.94:22-10.0.0.1:42452.service: Deactivated successfully.
Mar 6 01:45:00.141192 systemd[1]: session-26.scope: Deactivated successfully.
Mar 6 01:45:00.147897 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit.
Mar 6 01:45:00.155665 systemd-logind[1439]: Removed session 26.