Mar 4 00:59:10.365980 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026
Mar 4 00:59:10.366016 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 00:59:10.366034 kernel: BIOS-provided physical RAM map:
Mar 4 00:59:10.366044 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 4 00:59:10.366052 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 4 00:59:10.366062 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 4 00:59:10.366073 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 4 00:59:10.366082 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 4 00:59:10.366091 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 4 00:59:10.366105 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 4 00:59:10.366114 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 4 00:59:10.366124 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 4 00:59:10.366177 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 4 00:59:10.366188 kernel: NX (Execute Disable) protection: active
Mar 4 00:59:10.366199 kernel: APIC: Static calls initialized
Mar 4 00:59:10.366251 kernel: SMBIOS 2.8 present.
Mar 4 00:59:10.366262 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 4 00:59:10.366271 kernel: Hypervisor detected: KVM
Mar 4 00:59:10.366280 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 4 00:59:10.366290 kernel: kvm-clock: using sched offset of 11593418649 cycles
Mar 4 00:59:10.366300 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 4 00:59:10.366310 kernel: tsc: Detected 2445.426 MHz processor
Mar 4 00:59:10.366321 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 4 00:59:10.366331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 4 00:59:10.366346 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 4 00:59:10.366358 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 4 00:59:10.366367 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 4 00:59:10.366378 kernel: Using GB pages for direct mapping
Mar 4 00:59:10.366387 kernel: ACPI: Early table checksum verification disabled
Mar 4 00:59:10.366397 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 4 00:59:10.366407 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366417 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366427 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366441 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 4 00:59:10.366451 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366461 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366471 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366480 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 00:59:10.366490 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 4 00:59:10.366500 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 4 00:59:10.366517 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 4 00:59:10.366621 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 4 00:59:10.366632 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 4 00:59:10.366643 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 4 00:59:10.366654 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 4 00:59:10.366665 kernel: No NUMA configuration found
Mar 4 00:59:10.366676 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 4 00:59:10.366693 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 4 00:59:10.366704 kernel: Zone ranges:
Mar 4 00:59:10.366715 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 4 00:59:10.366726 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 4 00:59:10.366736 kernel: Normal empty
Mar 4 00:59:10.366746 kernel: Movable zone start for each node
Mar 4 00:59:10.366757 kernel: Early memory node ranges
Mar 4 00:59:10.366767 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 4 00:59:10.366777 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 4 00:59:10.366792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 4 00:59:10.366802 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 4 00:59:10.366853 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 4 00:59:10.366865 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 4 00:59:10.366930 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 4 00:59:10.366941 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 4 00:59:10.366951 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 4 00:59:10.366962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 4 00:59:10.366972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 4 00:59:10.366988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 4 00:59:10.366999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 4 00:59:10.367010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 4 00:59:10.367020 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 4 00:59:10.367031 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 4 00:59:10.367041 kernel: TSC deadline timer available
Mar 4 00:59:10.367052 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 4 00:59:10.367063 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 4 00:59:10.367073 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 4 00:59:10.367130 kernel: kvm-guest: setup PV sched yield
Mar 4 00:59:10.367143 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 4 00:59:10.367154 kernel: Booting paravirtualized kernel on KVM
Mar 4 00:59:10.367166 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 4 00:59:10.367177 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 4 00:59:10.367189 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 4 00:59:10.367199 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 4 00:59:10.367210 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 4 00:59:10.367221 kernel: kvm-guest: PV spinlocks enabled
Mar 4 00:59:10.367237 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 4 00:59:10.367249 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 00:59:10.367261 kernel: random: crng init done
Mar 4 00:59:10.367272 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 00:59:10.367283 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 00:59:10.367293 kernel: Fallback order for Node 0: 0
Mar 4 00:59:10.367305 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 4 00:59:10.367316 kernel: Policy zone: DMA32
Mar 4 00:59:10.367331 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 00:59:10.367343 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 4 00:59:10.367354 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 4 00:59:10.367365 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 4 00:59:10.367375 kernel: ftrace: allocated 149 pages with 4 groups
Mar 4 00:59:10.367385 kernel: Dynamic Preempt: voluntary
Mar 4 00:59:10.367394 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 00:59:10.367406 kernel: rcu: RCU event tracing is enabled.
Mar 4 00:59:10.367417 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 4 00:59:10.367433 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 00:59:10.367444 kernel: Rude variant of Tasks RCU enabled.
Mar 4 00:59:10.367455 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 00:59:10.367467 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 00:59:10.367478 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 4 00:59:10.367602 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 4 00:59:10.367615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 00:59:10.367627 kernel: Console: colour VGA+ 80x25
Mar 4 00:59:10.367637 kernel: printk: console [ttyS0] enabled
Mar 4 00:59:10.367654 kernel: ACPI: Core revision 20230628
Mar 4 00:59:10.367665 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 4 00:59:10.367676 kernel: APIC: Switch to symmetric I/O mode setup
Mar 4 00:59:10.367688 kernel: x2apic enabled
Mar 4 00:59:10.367699 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 4 00:59:10.367710 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 4 00:59:10.367721 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 4 00:59:10.367732 kernel: kvm-guest: setup PV IPIs
Mar 4 00:59:10.367743 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 4 00:59:10.367772 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 4 00:59:10.367784 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 4 00:59:10.367795 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 4 00:59:10.367811 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 4 00:59:10.367822 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 4 00:59:10.367834 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 4 00:59:10.367846 kernel: Spectre V2 : Mitigation: Retpolines
Mar 4 00:59:10.367858 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 4 00:59:10.368497 kernel: Speculative Store Bypass: Vulnerable
Mar 4 00:59:10.368515 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 4 00:59:10.368653 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 4 00:59:10.368669 kernel: active return thunk: srso_alias_return_thunk
Mar 4 00:59:10.368683 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 4 00:59:10.368696 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 4 00:59:10.368708 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 4 00:59:10.368720 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 4 00:59:10.368738 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 4 00:59:10.368750 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 4 00:59:10.368763 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 4 00:59:10.368776 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 4 00:59:10.368789 kernel: Freeing SMP alternatives memory: 32K
Mar 4 00:59:10.368801 kernel: pid_max: default: 32768 minimum: 301
Mar 4 00:59:10.368815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 00:59:10.368828 kernel: landlock: Up and running.
Mar 4 00:59:10.368841 kernel: SELinux: Initializing.
Mar 4 00:59:10.368860 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:59:10.368927 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:59:10.368941 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 4 00:59:10.368953 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 00:59:10.368965 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 00:59:10.368977 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 00:59:10.368990 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 4 00:59:10.369002 kernel: signal: max sigframe size: 1776
Mar 4 00:59:10.369055 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 00:59:10.369078 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 00:59:10.369091 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 4 00:59:10.369103 kernel: smp: Bringing up secondary CPUs ...
Mar 4 00:59:10.369114 kernel: smpboot: x86: Booting SMP configuration:
Mar 4 00:59:10.369126 kernel: .... node #0, CPUs: #1 #2 #3
Mar 4 00:59:10.369138 kernel: smp: Brought up 1 node, 4 CPUs
Mar 4 00:59:10.369150 kernel: smpboot: Max logical packages: 1
Mar 4 00:59:10.369162 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 4 00:59:10.369175 kernel: devtmpfs: initialized
Mar 4 00:59:10.369194 kernel: x86/mm: Memory block size: 128MB
Mar 4 00:59:10.369207 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 00:59:10.369220 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 4 00:59:10.369232 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 00:59:10.369243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 00:59:10.369254 kernel: audit: initializing netlink subsys (disabled)
Mar 4 00:59:10.369266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 00:59:10.369278 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 4 00:59:10.369291 kernel: audit: type=2000 audit(1772585944.477:1): state=initialized audit_enabled=0 res=1
Mar 4 00:59:10.369309 kernel: cpuidle: using governor menu
Mar 4 00:59:10.369322 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 00:59:10.369335 kernel: dca service started, version 1.12.1
Mar 4 00:59:10.369348 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 4 00:59:10.369361 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 4 00:59:10.369374 kernel: PCI: Using configuration type 1 for base access
Mar 4 00:59:10.369386 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 4 00:59:10.369397 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 00:59:10.369407 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 00:59:10.369423 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 00:59:10.369435 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 00:59:10.369446 kernel: ACPI: Added _OSI(Module Device)
Mar 4 00:59:10.369458 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 00:59:10.369470 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 00:59:10.369483 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 00:59:10.369496 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 4 00:59:10.369508 kernel: ACPI: Interpreter enabled
Mar 4 00:59:10.369521 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 4 00:59:10.369634 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 4 00:59:10.369649 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 4 00:59:10.369661 kernel: PCI: Using E820 reservations for host bridge windows
Mar 4 00:59:10.369674 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 4 00:59:10.369685 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 4 00:59:10.370285 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 4 00:59:10.370519 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 4 00:59:10.372465 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 4 00:59:10.372489 kernel: PCI host bridge to bus 0000:00
Mar 4 00:59:10.372800 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 4 00:59:10.373069 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 4 00:59:10.373254 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 4 00:59:10.373431 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 4 00:59:10.373755 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 4 00:59:10.374010 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 4 00:59:10.374196 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 4 00:59:10.374461 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 4 00:59:10.374755 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 4 00:59:10.375021 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 4 00:59:10.375497 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 4 00:59:10.375964 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 4 00:59:10.376179 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 4 00:59:10.376384 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 4 00:59:10.379424 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 4 00:59:10.380226 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 4 00:59:10.380452 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 4 00:59:10.381162 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 4 00:59:10.381373 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 4 00:59:10.381702 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 4 00:59:10.381962 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 4 00:59:10.382173 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 4 00:59:10.382363 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 4 00:59:10.382651 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 4 00:59:10.383185 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 4 00:59:10.383391 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 4 00:59:10.383928 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 4 00:59:10.384135 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 4 00:59:10.384341 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 4 00:59:10.384790 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 4 00:59:10.385114 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 4 00:59:10.385329 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 4 00:59:10.385654 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 4 00:59:10.385675 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 4 00:59:10.385688 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 4 00:59:10.385700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 4 00:59:10.385711 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 4 00:59:10.385723 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 4 00:59:10.385735 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 4 00:59:10.385746 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 4 00:59:10.385757 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 4 00:59:10.385777 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 4 00:59:10.385788 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 4 00:59:10.385800 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 4 00:59:10.385811 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 4 00:59:10.385823 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 4 00:59:10.385835 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 4 00:59:10.385846 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 4 00:59:10.385858 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 4 00:59:10.388266 kernel: iommu: Default domain type: Translated
Mar 4 00:59:10.388301 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 4 00:59:10.388314 kernel: PCI: Using ACPI for IRQ routing
Mar 4 00:59:10.388325 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 4 00:59:10.388337 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 4 00:59:10.388348 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 4 00:59:10.388759 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 4 00:59:10.389844 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 4 00:59:10.390137 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 4 00:59:10.390167 kernel: vgaarb: loaded
Mar 4 00:59:10.390180 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 4 00:59:10.390192 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 4 00:59:10.390204 kernel: clocksource: Switched to clocksource kvm-clock
Mar 4 00:59:10.390217 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 00:59:10.390229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 00:59:10.390240 kernel: pnp: PnP ACPI init
Mar 4 00:59:10.390457 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 4 00:59:10.390483 kernel: pnp: PnP ACPI: found 6 devices
Mar 4 00:59:10.390496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 4 00:59:10.390508 kernel: NET: Registered PF_INET protocol family
Mar 4 00:59:10.390519 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 00:59:10.390715 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 00:59:10.390728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 00:59:10.390739 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 00:59:10.390750 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 00:59:10.390761 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 00:59:10.390780 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:59:10.390791 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:59:10.390802 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 00:59:10.390813 kernel: NET: Registered PF_XDP protocol family
Mar 4 00:59:10.391070 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 4 00:59:10.391320 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 4 00:59:10.391502 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 4 00:59:10.393251 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 4 00:59:10.393437 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 4 00:59:10.393806 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 4 00:59:10.393825 kernel: PCI: CLS 0 bytes, default 64
Mar 4 00:59:10.393836 kernel: Initialise system trusted keyrings
Mar 4 00:59:10.393847 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 00:59:10.393858 kernel: Key type asymmetric registered
Mar 4 00:59:10.393925 kernel: Asymmetric key parser 'x509' registered
Mar 4 00:59:10.393940 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 4 00:59:10.393953 kernel: io scheduler mq-deadline registered
Mar 4 00:59:10.393972 kernel: io scheduler kyber registered
Mar 4 00:59:10.393983 kernel: io scheduler bfq registered
Mar 4 00:59:10.393995 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 4 00:59:10.394155 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 4 00:59:10.394174 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 4 00:59:10.394186 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 4 00:59:10.394197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 00:59:10.394208 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 4 00:59:10.394219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 4 00:59:10.394229 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 4 00:59:10.394246 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 4 00:59:10.394698 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 4 00:59:10.394722 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 4 00:59:10.394987 kernel: rtc_cmos 00:04: registered as rtc0
Mar 4 00:59:10.395180 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T00:59:08 UTC (1772585948)
Mar 4 00:59:10.395370 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 4 00:59:10.395388 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 4 00:59:10.395408 kernel: NET: Registered PF_INET6 protocol family
Mar 4 00:59:10.395420 kernel: Segment Routing with IPv6
Mar 4 00:59:10.395432 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 00:59:10.395443 kernel: NET: Registered PF_PACKET protocol family
Mar 4 00:59:10.395455 kernel: Key type dns_resolver registered
Mar 4 00:59:10.395466 kernel: IPI shorthand broadcast: enabled
Mar 4 00:59:10.395478 kernel: sched_clock: Marking stable (3286057361, 1056528601)->(5376431952, -1033845990)
Mar 4 00:59:10.395490 kernel: registered taskstats version 1
Mar 4 00:59:10.395501 kernel: Loading compiled-in X.509 certificates
Mar 4 00:59:10.395512 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498'
Mar 4 00:59:10.395719 kernel: Key type .fscrypt registered
Mar 4 00:59:10.395734 kernel: Key type fscrypt-provisioning registered
Mar 4 00:59:10.395747 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 00:59:10.395759 kernel: ima: Allocated hash algorithm: sha1
Mar 4 00:59:10.395770 kernel: ima: No architecture policies found
Mar 4 00:59:10.395782 kernel: clk: Disabling unused clocks
Mar 4 00:59:10.395793 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 4 00:59:10.395804 kernel: Write protecting the kernel read-only data: 36864k
Mar 4 00:59:10.395822 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 4 00:59:10.395833 kernel: Run /init as init process
Mar 4 00:59:10.395845 kernel: with arguments:
Mar 4 00:59:10.395857 kernel: /init
Mar 4 00:59:10.395923 kernel: with environment:
Mar 4 00:59:10.395937 kernel: HOME=/
Mar 4 00:59:10.395948 kernel: TERM=linux
Mar 4 00:59:10.395963 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:59:10.395983 systemd[1]: Detected virtualization kvm.
Mar 4 00:59:10.395996 systemd[1]: Detected architecture x86-64.
Mar 4 00:59:10.396008 systemd[1]: Running in initrd.
Mar 4 00:59:10.396021 systemd[1]: No hostname configured, using default hostname.
Mar 4 00:59:10.396032 systemd[1]: Hostname set to .
Mar 4 00:59:10.396044 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 00:59:10.396056 systemd[1]: Queued start job for default target initrd.target.
Mar 4 00:59:10.396068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:59:10.396087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:59:10.396101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 00:59:10.396114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:59:10.396127 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:59:10.396141 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:59:10.396155 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:59:10.396167 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:59:10.396185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:59:10.396196 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:59:10.396207 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:59:10.396219 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:59:10.396232 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:59:10.396270 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:59:10.396291 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:59:10.396305 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:59:10.396318 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:59:10.396332 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:59:10.396344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:59:10.396356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:59:10.396369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:59:10.396381 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:59:10.396393 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:59:10.396409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:59:10.396421 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:59:10.396433 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:59:10.396444 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:59:10.396455 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:59:10.396466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:59:10.396478 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:59:10.398065 systemd-journald[194]: Collecting audit messages is disabled.
Mar 4 00:59:10.398119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:59:10.398134 systemd-journald[194]: Journal started
Mar 4 00:59:10.398164 systemd-journald[194]: Runtime Journal (/run/log/journal/128c2dc296d34674a132743b26c84110) is 6.0M, max 48.4M, 42.3M free.
Mar 4 00:59:10.421368 systemd-modules-load[195]: Inserted module 'overlay'
Mar 4 00:59:10.455230 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:59:10.482158 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:59:10.540126 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:59:10.555414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:59:10.595988 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:59:10.613217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:59:10.696000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:59:11.141052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:59:11.141106 kernel: Bridge firewalling registered
Mar 4 00:59:10.808066 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 4 00:59:11.125863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:59:11.141986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:59:11.199776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:59:11.209184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:59:11.261005 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:59:11.276046 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:59:11.342174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:59:11.410228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:59:11.450865 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:59:11.503241 dracut-cmdline[234]: dracut-dracut-053
Mar 4 00:59:11.514991 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 00:59:11.543426 systemd-resolved[224]: Positive Trust Anchors:
Mar 4 00:59:11.543438 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:59:11.543481 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:59:11.556819 systemd-resolved[224]: Defaulting to hostname 'linux'.
Mar 4 00:59:11.575215 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:59:11.584081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:59:11.956241 kernel: SCSI subsystem initialized
Mar 4 00:59:11.981148 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:59:12.030765 kernel: iscsi: registered transport (tcp)
Mar 4 00:59:12.101140 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:59:12.101225 kernel: QLogic iSCSI HBA Driver
Mar 4 00:59:12.301987 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:59:12.326287 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:59:12.432681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:59:12.432767 kernel: device-mapper: uevent: version 1.0.3 Mar 4 00:59:12.434031 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 00:59:12.541985 kernel: raid6: avx2x4 gen() 18440 MB/s Mar 4 00:59:12.561717 kernel: raid6: avx2x2 gen() 18886 MB/s Mar 4 00:59:12.584461 kernel: raid6: avx2x1 gen() 11236 MB/s Mar 4 00:59:12.584658 kernel: raid6: using algorithm avx2x2 gen() 18886 MB/s Mar 4 00:59:12.608258 kernel: raid6: .... xor() 14994 MB/s, rmw enabled Mar 4 00:59:12.608347 kernel: raid6: using avx2x2 recovery algorithm Mar 4 00:59:12.664092 kernel: xor: automatically using best checksumming function avx Mar 4 00:59:13.315036 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 00:59:13.354451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 00:59:13.384141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 00:59:13.417860 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 4 00:59:13.444697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 00:59:13.500007 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 4 00:59:13.561079 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Mar 4 00:59:13.688222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 00:59:13.726145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 00:59:13.968672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 00:59:14.033425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 00:59:14.115327 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 00:59:14.130758 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 00:59:14.159309 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 00:59:14.182383 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 00:59:14.229972 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 00:59:14.319733 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 4 00:59:14.320280 kernel: cryptd: max_cpu_qlen set to 1000 Mar 4 00:59:14.324868 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 00:59:14.327369 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:59:14.357399 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 00:59:14.371603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 00:59:14.371991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:59:14.442894 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 4 00:59:14.406420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:59:14.482139 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 4 00:59:14.482192 kernel: GPT:9289727 != 19775487 Mar 4 00:59:14.482213 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 4 00:59:14.482229 kernel: GPT:9289727 != 19775487 Mar 4 00:59:14.483999 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 4 00:59:14.484306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 00:59:14.527725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:59:14.554412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 00:59:14.570136 kernel: libata version 3.00 loaded. Mar 4 00:59:14.644755 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 4 00:59:14.659346 kernel: AES CTR mode by8 optimization enabled Mar 4 00:59:14.660773 kernel: ahci 0000:00:1f.2: version 3.0 Mar 4 00:59:14.661722 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 4 00:59:14.672669 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 4 00:59:14.673274 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 4 00:59:14.677191 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 4 00:59:15.205040 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (472) Mar 4 00:59:15.205082 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (481) Mar 4 00:59:15.205096 kernel: scsi host0: ahci Mar 4 00:59:15.205669 kernel: scsi host1: ahci Mar 4 00:59:15.205997 kernel: scsi host2: ahci Mar 4 00:59:15.206243 kernel: scsi host3: ahci Mar 4 00:59:15.206424 kernel: scsi host4: ahci Mar 4 00:59:15.206698 kernel: scsi host5: ahci Mar 4 00:59:15.206867 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 4 00:59:15.206879 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 4 00:59:15.206889 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 4 00:59:15.206899 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 4 00:59:15.206909 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 4 00:59:15.206971 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 4 00:59:15.206987 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 4 00:59:15.206997 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 4 00:59:15.207007 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 4 00:59:15.207016 kernel: ata3.00: applying bridge limits Mar 4 00:59:15.207026 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 
4 00:59:15.207036 kernel: ata3.00: configured for UDMA/100 Mar 4 00:59:15.207045 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 4 00:59:15.207055 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 4 00:59:15.207065 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 4 00:59:15.207081 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 4 00:59:15.207326 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 4 00:59:15.207507 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 4 00:59:15.216410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:59:15.225787 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 4 00:59:15.245669 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 4 00:59:15.249092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 00:59:15.253145 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 4 00:59:15.268398 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 4 00:59:15.318435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 4 00:59:15.324485 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 00:59:15.372019 disk-uuid[573]: Primary Header is updated. Mar 4 00:59:15.372019 disk-uuid[573]: Secondary Entries is updated. Mar 4 00:59:15.372019 disk-uuid[573]: Secondary Header is updated. Mar 4 00:59:15.386843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 00:59:15.414153 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 4 00:59:16.476462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 00:59:16.483882 disk-uuid[578]: The operation has completed successfully. Mar 4 00:59:16.752266 kernel: hrtimer: interrupt took 3121907 ns Mar 4 00:59:17.005196 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 00:59:17.013436 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 00:59:17.661417 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 4 00:59:18.499430 sh[599]: Success Mar 4 00:59:19.000267 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 4 00:59:19.562176 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 00:59:19.633279 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 00:59:19.675755 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 00:59:19.758442 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605 Mar 4 00:59:19.758763 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 4 00:59:19.758853 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 00:59:19.764169 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 00:59:19.768321 kernel: BTRFS info (device dm-0): using free space tree Mar 4 00:59:19.883465 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 00:59:19.905460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 00:59:19.954142 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 00:59:19.984360 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 4 00:59:20.038863 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 00:59:20.043071 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 00:59:20.043221 kernel: BTRFS info (device vda6): using free space tree Mar 4 00:59:20.073375 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 00:59:20.160937 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 4 00:59:20.180372 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 00:59:20.256156 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 00:59:20.312933 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 00:59:20.669380 ignition[697]: Ignition 2.19.0 Mar 4 00:59:20.669451 ignition[697]: Stage: fetch-offline Mar 4 00:59:20.669519 ignition[697]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:59:20.669646 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 00:59:20.669791 ignition[697]: parsed url from cmdline: "" Mar 4 00:59:20.669798 ignition[697]: no config URL provided Mar 4 00:59:20.669807 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 00:59:20.669822 ignition[697]: no config at "/usr/lib/ignition/user.ign" Mar 4 00:59:20.669878 ignition[697]: op(1): [started] loading QEMU firmware config module Mar 4 00:59:20.669887 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 4 00:59:20.772937 ignition[697]: op(1): [finished] loading QEMU firmware config module Mar 4 00:59:20.800864 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 00:59:20.861390 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 4 00:59:20.978823 systemd-networkd[788]: lo: Link UP Mar 4 00:59:20.983747 systemd-networkd[788]: lo: Gained carrier Mar 4 00:59:21.010952 systemd-networkd[788]: Enumeration completed Mar 4 00:59:21.015418 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 00:59:21.023746 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:59:21.023754 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 00:59:21.043435 systemd[1]: Reached target network.target - Network. Mar 4 00:59:21.077822 systemd-networkd[788]: eth0: Link UP Mar 4 00:59:21.077829 systemd-networkd[788]: eth0: Gained carrier Mar 4 00:59:21.077845 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:59:21.133345 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 4 00:59:21.663834 ignition[697]: parsing config with SHA512: d4b3c059c615ad1047a5dd7da34fde2a3a2baf0050a6398dac33a17136c6417366e828dce71b6e9b637f4a064a117b554370b961026a89c964486afb2bdbf72c Mar 4 00:59:21.686425 unknown[697]: fetched base config from "system" Mar 4 00:59:21.688844 unknown[697]: fetched user config from "qemu" Mar 4 00:59:21.710769 ignition[697]: fetch-offline: fetch-offline passed Mar 4 00:59:21.710930 ignition[697]: Ignition finished successfully Mar 4 00:59:21.730457 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 00:59:21.755267 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 4 00:59:21.800481 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 4 00:59:21.944787 ignition[792]: Ignition 2.19.0 Mar 4 00:59:21.944854 ignition[792]: Stage: kargs Mar 4 00:59:21.945218 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:59:21.961380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 4 00:59:21.945233 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 00:59:21.951518 ignition[792]: kargs: kargs passed Mar 4 00:59:21.951693 ignition[792]: Ignition finished successfully Mar 4 00:59:22.019142 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 4 00:59:22.129333 ignition[800]: Ignition 2.19.0 Mar 4 00:59:22.129396 ignition[800]: Stage: disks Mar 4 00:59:22.129793 ignition[800]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:59:22.156804 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 4 00:59:22.129812 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 00:59:22.189342 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 4 00:59:22.138869 ignition[800]: disks: disks passed Mar 4 00:59:22.242285 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 4 00:59:22.144361 ignition[800]: Ignition finished successfully Mar 4 00:59:22.259621 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 00:59:22.320077 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 00:59:22.355707 systemd[1]: Reached target basic.target - Basic System. Mar 4 00:59:22.444404 systemd-networkd[788]: eth0: Gained IPv6LL Mar 4 00:59:22.451842 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 4 00:59:22.523267 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 4 00:59:22.562309 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 4 00:59:22.611683 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 4 00:59:23.811886 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none. Mar 4 00:59:23.824140 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 4 00:59:23.849227 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 4 00:59:23.899699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 00:59:23.926477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 4 00:59:24.004821 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Mar 4 00:59:23.944654 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 4 00:59:24.069798 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 00:59:24.069842 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 00:59:24.069860 kernel: BTRFS info (device vda6): using free space tree Mar 4 00:59:23.944732 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 4 00:59:23.944783 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 00:59:24.112135 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 00:59:23.968771 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 4 00:59:23.988723 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 4 00:59:24.131468 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 4 00:59:24.255424 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 4 00:59:24.281932 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 4 00:59:24.335419 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 4 00:59:24.377326 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 4 00:59:24.843899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 4 00:59:24.898778 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 4 00:59:24.943882 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 4 00:59:24.976714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 4 00:59:25.005175 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 00:59:25.081201 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 4 00:59:25.178080 ignition[932]: INFO : Ignition 2.19.0 Mar 4 00:59:25.178080 ignition[932]: INFO : Stage: mount Mar 4 00:59:25.178080 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:59:25.178080 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 00:59:25.256243 ignition[932]: INFO : mount: mount passed Mar 4 00:59:25.256243 ignition[932]: INFO : Ignition finished successfully Mar 4 00:59:25.265516 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 4 00:59:25.352240 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 4 00:59:25.512484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 4 00:59:25.629430 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Mar 4 00:59:25.656642 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 00:59:25.656867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 00:59:25.663432 kernel: BTRFS info (device vda6): using free space tree Mar 4 00:59:25.717974 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 00:59:25.732102 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 4 00:59:25.904963 ignition[961]: INFO : Ignition 2.19.0 Mar 4 00:59:25.904963 ignition[961]: INFO : Stage: files Mar 4 00:59:25.904963 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:59:25.904963 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 00:59:25.973685 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Mar 4 00:59:25.973685 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 4 00:59:25.973685 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 4 00:59:25.973685 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 4 00:59:26.039196 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 4 00:59:26.060298 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 4 00:59:26.060298 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 4 00:59:26.060298 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 4 00:59:26.040066 unknown[961]: wrote ssh authorized keys file for user: core Mar 4 00:59:26.279899 ignition[961]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 4 00:59:26.534645 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 4 00:59:26.534645 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 00:59:26.563345 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 00:59:26.713476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 00:59:26.713476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 00:59:26.713476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 00:59:26.713476 
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 00:59:26.713476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 00:59:26.713476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 4 00:59:27.045273 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 4 00:59:28.483184 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 00:59:28.483184 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 4 00:59:28.520439 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 4 00:59:28.520439 
ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 4 00:59:28.661198 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 4 00:59:28.702486 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 4 00:59:28.702486 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 4 00:59:28.702486 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 4 00:59:28.702486 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 4 00:59:28.702486 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 4 00:59:28.702486 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 4 00:59:28.702486 ignition[961]: INFO : files: files passed Mar 4 00:59:28.702486 ignition[961]: INFO : Ignition finished successfully Mar 4 00:59:28.790745 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 4 00:59:28.818109 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 4 00:59:28.837249 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 4 00:59:28.870604 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Mar 4 00:59:28.882459 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 4 00:59:28.882909 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 4 00:59:28.940304 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 00:59:28.940304 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 4 00:59:28.898323 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 00:59:28.973519 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 00:59:28.900801 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 4 00:59:28.952392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 4 00:59:29.044933 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 4 00:59:29.048867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 4 00:59:29.060283 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 4 00:59:29.100657 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 4 00:59:29.115636 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 4 00:59:29.134968 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 4 00:59:29.215362 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 00:59:29.256431 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 4 00:59:29.327476 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 4 00:59:29.349288 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 00:59:29.352924 systemd[1]: Stopped target timers.target - Timer Units. Mar 4 00:59:29.390327 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 4 00:59:29.390739 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 00:59:29.411090 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 4 00:59:29.428201 systemd[1]: Stopped target basic.target - Basic System. Mar 4 00:59:29.463816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 4 00:59:29.495155 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 00:59:29.516869 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 4 00:59:29.550090 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 4 00:59:29.557311 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 4 00:59:29.570146 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 4 00:59:29.576254 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 4 00:59:29.590627 systemd[1]: Stopped target swap.target - Swaps. Mar 4 00:59:29.597795 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 4 00:59:29.598333 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 4 00:59:29.625221 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 4 00:59:29.634860 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 00:59:29.661965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 4 00:59:29.664449 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 00:59:29.678946 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 4 00:59:29.679273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 4 00:59:29.698225 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 4 00:59:29.698496 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 4 00:59:29.713187 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:59:29.722866 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:59:29.726029 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:59:29.842744 ignition[1016]: INFO : Ignition 2.19.0
Mar 4 00:59:29.842744 ignition[1016]: INFO : Stage: umount
Mar 4 00:59:29.842744 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:29.842744 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:29.842744 ignition[1016]: INFO : umount: umount passed
Mar 4 00:59:29.842744 ignition[1016]: INFO : Ignition finished successfully
Mar 4 00:59:29.726388 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:59:29.732878 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:59:29.738469 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:59:29.739010 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:59:29.745726 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:59:29.745937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:59:29.746456 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:59:29.746899 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:59:29.747486 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:59:29.747744 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:59:29.788030 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:59:29.811973 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:59:29.826227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:59:29.826719 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:59:29.862984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:59:29.863387 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:59:29.971688 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:59:29.976436 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:59:29.976703 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:59:29.992460 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:59:29.992787 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:59:30.011163 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 00:59:30.011433 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 00:59:30.037458 systemd[1]: Stopped target network.target - Network.
Mar 4 00:59:30.052401 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:59:30.052712 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:59:30.071405 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:59:30.077776 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:59:30.092785 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:59:30.092955 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:59:30.100241 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:59:30.106031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:59:30.131348 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:59:30.131677 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:59:30.159859 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:59:30.174894 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:59:30.190929 systemd-networkd[788]: eth0: DHCPv6 lease lost
Mar 4 00:59:30.198343 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:59:30.198719 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:59:30.214019 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:59:30.214860 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:59:30.241683 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:59:30.241795 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:59:30.270803 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:59:30.285968 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:59:30.286825 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:59:30.314277 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:59:30.314388 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:59:30.332253 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:59:30.332418 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:59:30.358481 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:59:30.358786 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:59:30.368923 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:59:30.431254 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:59:30.445313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:59:30.469205 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:59:30.476110 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:59:30.502667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:59:30.502839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:59:30.512486 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:59:30.512770 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:59:30.539363 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:59:30.539501 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:59:30.552799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:59:30.552930 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:59:30.566773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:59:30.566902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:59:30.634997 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:59:30.651889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:59:30.652038 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:59:30.682456 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 00:59:30.682743 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:59:30.717444 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 00:59:30.717925 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:59:30.741191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:59:30.741366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:59:30.756262 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:59:30.777404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:59:30.821425 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:59:30.863254 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:59:30.910346 systemd[1]: Switching root.
Mar 4 00:59:30.969292 systemd-journald[194]: Journal stopped
Mar 4 00:59:33.843629 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 4 00:59:33.843709 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 00:59:33.843772 kernel: SELinux: policy capability open_perms=1
Mar 4 00:59:33.843828 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 00:59:33.843840 kernel: SELinux: policy capability always_check_network=0
Mar 4 00:59:33.843852 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 00:59:33.843863 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 00:59:33.843874 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 00:59:33.843891 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 00:59:33.843902 kernel: audit: type=1403 audit(1772585971.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 00:59:33.843914 systemd[1]: Successfully loaded SELinux policy in 140.608ms.
Mar 4 00:59:33.843971 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.325ms.
Mar 4 00:59:33.843985 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:59:33.843997 systemd[1]: Detected virtualization kvm.
Mar 4 00:59:33.844009 systemd[1]: Detected architecture x86-64.
Mar 4 00:59:33.844020 systemd[1]: Detected first boot.
Mar 4 00:59:33.844033 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 00:59:33.844045 zram_generator::config[1061]: No configuration found.
Mar 4 00:59:33.844140 systemd[1]: Populated /etc with preset unit settings.
Mar 4 00:59:33.844160 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 00:59:33.844242 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 00:59:33.844263 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 00:59:33.844280 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 00:59:33.844292 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 00:59:33.844304 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 00:59:33.844316 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 00:59:33.844327 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 00:59:33.844339 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 00:59:33.844401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 00:59:33.844414 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 00:59:33.844426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:59:33.844438 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:59:33.844450 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 00:59:33.844462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 00:59:33.844474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 00:59:33.844485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:59:33.844498 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 00:59:33.844619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:59:33.844635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 00:59:33.844646 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 00:59:33.844659 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:59:33.844671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 00:59:33.844683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:59:33.844697 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:59:33.844708 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:59:33.844760 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:59:33.844773 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 00:59:33.844785 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 00:59:33.844796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:59:33.844808 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:59:33.844820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:59:33.844832 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 00:59:33.844843 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 00:59:33.844855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 00:59:33.844902 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 00:59:33.844914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:33.844926 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 00:59:33.844938 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 00:59:33.844950 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 00:59:33.844962 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 00:59:33.844973 systemd[1]: Reached target machines.target - Containers.
Mar 4 00:59:33.844985 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 00:59:33.845032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:59:33.845046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:59:33.845059 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 00:59:33.845110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:59:33.845133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:59:33.845152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:59:33.845169 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 00:59:33.845190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:59:33.845209 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 00:59:33.845281 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 4 00:59:33.845295 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 4 00:59:33.845306 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 4 00:59:33.845318 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 4 00:59:33.845330 kernel: fuse: init (API version 7.39)
Mar 4 00:59:33.845341 kernel: ACPI: bus type drm_connector registered
Mar 4 00:59:33.845353 kernel: loop: module loaded
Mar 4 00:59:33.845364 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:59:33.845376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:59:33.845429 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 00:59:33.845441 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 00:59:33.845478 systemd-journald[1145]: Collecting audit messages is disabled.
Mar 4 00:59:33.845499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:59:33.845512 systemd-journald[1145]: Journal started
Mar 4 00:59:33.845627 systemd-journald[1145]: Runtime Journal (/run/log/journal/128c2dc296d34674a132743b26c84110) is 6.0M, max 48.4M, 42.3M free.
Mar 4 00:59:32.902802 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 00:59:32.934832 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 4 00:59:32.935907 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 4 00:59:32.936851 systemd[1]: systemd-journald.service: Consumed 2.534s CPU time.
Mar 4 00:59:33.863843 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 4 00:59:33.863941 systemd[1]: Stopped verity-setup.service.
Mar 4 00:59:33.875805 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:33.883748 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:59:33.896747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 00:59:33.903710 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 00:59:33.912869 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 00:59:33.919853 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 00:59:33.928057 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 00:59:33.938370 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 00:59:33.944758 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 00:59:33.952946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:59:33.960711 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 00:59:33.961016 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 00:59:33.968231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:59:33.968649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:59:33.975365 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:59:33.975802 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:59:33.981854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:59:33.982174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:59:33.992037 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 00:59:33.992387 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 00:59:33.998760 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:59:33.999128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:59:34.005382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:59:34.013902 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 00:59:34.023027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 00:59:34.033448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:59:34.064330 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 00:59:34.093407 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 00:59:34.103231 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 00:59:34.110406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 00:59:34.110520 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:59:34.117770 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 00:59:34.128348 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 00:59:34.138457 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 00:59:34.145055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:59:34.149675 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 00:59:34.158348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 00:59:34.164972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:59:34.167761 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 00:59:34.174712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:59:34.183390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:59:34.199243 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 00:59:34.208839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:59:34.339438 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 00:59:34.353682 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 00:59:34.372318 systemd-journald[1145]: Time spent on flushing to /var/log/journal/128c2dc296d34674a132743b26c84110 is 204.881ms for 946 entries.
Mar 4 00:59:34.372318 systemd-journald[1145]: System Journal (/var/log/journal/128c2dc296d34674a132743b26c84110) is 8.0M, max 195.6M, 187.6M free.
Mar 4 00:59:34.628214 systemd-journald[1145]: Received client request to flush runtime journal.
Mar 4 00:59:34.628309 kernel: loop0: detected capacity change from 0 to 142488
Mar 4 00:59:34.372506 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 00:59:34.389512 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 00:59:34.401433 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 00:59:34.423386 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 00:59:34.583822 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 00:59:34.636243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 00:59:34.758387 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 4 00:59:34.762333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 00:59:34.765885 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 00:59:34.778824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 00:59:34.792772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:59:34.802832 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Mar 4 00:59:34.803363 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Mar 4 00:59:34.823474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:59:34.836710 kernel: loop1: detected capacity change from 0 to 228704
Mar 4 00:59:34.842785 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 00:59:34.905070 kernel: loop2: detected capacity change from 0 to 140768
Mar 4 00:59:34.931832 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 00:59:34.954982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:59:35.057690 kernel: loop3: detected capacity change from 0 to 142488
Mar 4 00:59:35.138727 kernel: loop4: detected capacity change from 0 to 228704
Mar 4 00:59:35.239697 kernel: loop5: detected capacity change from 0 to 140768
Mar 4 00:59:35.247515 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 4 00:59:35.248804 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 4 00:59:35.272504 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:59:35.282971 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 4 00:59:35.284678 (sd-merge)[1200]: Merged extensions into '/usr'.
Mar 4 00:59:35.292214 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 00:59:35.292402 systemd[1]: Reloading...
Mar 4 00:59:35.569625 zram_generator::config[1231]: No configuration found.
Mar 4 00:59:36.122384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:59:36.139480 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 00:59:36.311469 systemd[1]: Reloading finished in 1017 ms.
Mar 4 00:59:36.349189 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 00:59:36.354722 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 00:59:36.376167 systemd[1]: Starting ensure-sysext.service...
Mar 4 00:59:36.382173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:59:36.392278 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Mar 4 00:59:36.392300 systemd[1]: Reloading...
Mar 4 00:59:36.813606 zram_generator::config[1301]: No configuration found.
Mar 4 00:59:36.834041 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 00:59:36.834809 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 00:59:36.836447 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 00:59:36.836904 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Mar 4 00:59:36.837036 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Mar 4 00:59:36.843663 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:59:36.843708 systemd-tmpfiles[1266]: Skipping /boot
Mar 4 00:59:36.867276 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:59:36.867304 systemd-tmpfiles[1266]: Skipping /boot
Mar 4 00:59:37.258275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:59:37.481793 systemd[1]: Reloading finished in 1088 ms.
Mar 4 00:59:37.510232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 00:59:37.529385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:59:37.561669 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 00:59:37.573472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 00:59:37.608779 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 00:59:37.643694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:59:37.652807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:59:37.669691 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 00:59:37.681906 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:37.682092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:59:37.709816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:59:37.719444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:59:37.740020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:59:37.747416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:59:37.747714 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:37.749424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:59:37.749761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:59:37.772212 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 00:59:37.784429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 00:59:37.786394 systemd-udevd[1342]: Using default interface naming scheme 'v255'.
Mar 4 00:59:37.806470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:59:37.807294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:59:37.810066 augenrules[1356]: No rules
Mar 4 00:59:37.815186 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 00:59:37.822012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:59:37.822417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:59:37.843086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:37.843914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:59:37.863815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:59:37.883887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:59:37.913830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:59:37.929225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:59:37.938730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:59:37.949265 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 00:59:37.974354 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 00:59:37.982854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:37.985023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:59:38.008443 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 00:59:38.017496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:59:38.017876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:59:38.028056 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:59:38.034750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1385)
Mar 4 00:59:38.039998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:59:38.049049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:59:38.049447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:59:38.062731 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:59:38.063206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:59:38.100637 systemd[1]: Finished ensure-sysext.service.
Mar 4 00:59:38.113173 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 00:59:38.350321 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 4 00:59:38.364794 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 00:59:38.371903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 00:59:38.372083 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 00:59:38.376189 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 4 00:59:38.386336 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 4 00:59:38.386817 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 00:59:38.814641 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 4 00:59:38.834686 kernel: ACPI: button: Power Button [PWRF] Mar 4 00:59:39.152241 systemd-resolved[1340]: Positive Trust Anchors: Mar 4 00:59:39.152930 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 00:59:39.152963 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 00:59:39.160895 systemd-resolved[1340]: Defaulting to hostname 'linux'. Mar 4 00:59:39.182262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 4 00:59:39.182655 systemd-networkd[1404]: lo: Link UP Mar 4 00:59:39.182662 systemd-networkd[1404]: lo: Gained carrier Mar 4 00:59:39.199176 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 00:59:39.200705 systemd-networkd[1404]: Enumeration completed Mar 4 00:59:39.203215 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:59:39.203265 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 00:59:39.205946 systemd-networkd[1404]: eth0: Link UP Mar 4 00:59:39.205955 systemd-networkd[1404]: eth0: Gained carrier Mar 4 00:59:39.205977 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:59:39.206924 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 4 00:59:39.241750 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 00:59:39.257680 systemd[1]: Reached target network.target - Network. Mar 4 00:59:39.262752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 00:59:39.262820 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 4 00:59:39.269807 systemd[1]: Reached target time-set.target - System Time Set. Mar 4 00:59:39.276503 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Mar 4 00:59:39.879725 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 4 00:59:39.881036 systemd-timesyncd[1408]: Initial clock synchronization to Wed 2026-03-04 00:59:39.878686 UTC. Mar 4 00:59:39.886683 systemd-resolved[1340]: Clock change detected. Flushing caches. 
Mar 4 00:59:39.898193 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 4 00:59:39.907484 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 4 00:59:39.907524 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 4 00:59:39.919756 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 4 00:59:39.924179 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 4 00:59:39.938492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 00:59:39.954326 kernel: mousedev: PS/2 mouse device common for all mice Mar 4 00:59:39.959594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 4 00:59:40.280899 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 4 00:59:40.611547 kernel: kvm_amd: TSC scaling supported Mar 4 00:59:40.612196 kernel: kvm_amd: Nested Virtualization enabled Mar 4 00:59:40.612486 kernel: kvm_amd: Nested Paging enabled Mar 4 00:59:40.612609 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 4 00:59:40.612720 kernel: kvm_amd: PMU virtualization is disabled Mar 4 00:59:40.800382 kernel: EDAC MC: Ver: 3.0.0 Mar 4 00:59:40.867562 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 4 00:59:40.989438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:59:41.025350 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 4 00:59:41.077455 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 00:59:41.235801 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 4 00:59:41.248994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Mar 4 00:59:41.265787 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 00:59:41.273605 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 4 00:59:41.281621 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 4 00:59:41.296030 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 4 00:59:41.305385 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 4 00:59:41.312475 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 4 00:59:41.318753 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 4 00:59:41.320103 systemd[1]: Reached target paths.target - Path Units. Mar 4 00:59:41.324423 systemd[1]: Reached target timers.target - Timer Units. Mar 4 00:59:41.330187 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 4 00:59:41.339138 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 4 00:59:41.351736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 4 00:59:41.359039 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 4 00:59:41.365088 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 4 00:59:41.401623 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 00:59:41.410461 systemd[1]: Reached target basic.target - Basic System. Mar 4 00:59:41.429477 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 4 00:59:41.429754 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 4 00:59:41.437441 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 4 00:59:41.452929 systemd[1]: Starting containerd.service - containerd container runtime... Mar 4 00:59:41.540357 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 4 00:59:41.550364 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 4 00:59:41.562575 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 4 00:59:41.570167 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 4 00:59:41.574070 jq[1438]: false Mar 4 00:59:41.574558 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 4 00:59:41.583665 systemd-networkd[1404]: eth0: Gained IPv6LL Mar 4 00:59:41.606561 dbus-daemon[1437]: [system] SELinux support is enabled Mar 4 00:59:41.612636 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 4 00:59:41.637568 extend-filesystems[1439]: Found loop3 Mar 4 00:59:41.637568 extend-filesystems[1439]: Found loop4 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found loop5 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found sr0 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda1 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda2 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda3 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found usr Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda4 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda6 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda7 Mar 4 00:59:41.657199 extend-filesystems[1439]: Found vda9 Mar 4 00:59:41.657199 extend-filesystems[1439]: Checking size of /dev/vda9 Mar 4 00:59:41.810078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1378) Mar 4 00:59:41.810128 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks 
Mar 4 00:59:41.637733 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 4 00:59:41.810948 extend-filesystems[1439]: Resized partition /dev/vda9 Mar 4 00:59:41.728718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 4 00:59:41.817618 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Mar 4 00:59:41.846698 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 4 00:59:41.879637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 4 00:59:41.885336 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 4 00:59:41.883116 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 00:59:41.909632 systemd[1]: Starting update-engine.service - Update Engine... Mar 4 00:59:42.201292 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 4 00:59:42.211647 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 4 00:59:42.211647 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 4 00:59:42.211647 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 4 00:59:42.211601 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 4 00:59:42.264728 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Mar 4 00:59:42.226407 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Mar 4 00:59:42.270434 jq[1460]: true Mar 4 00:59:42.271132 update_engine[1458]: I20260304 00:59:42.264564 1458 main.cc:92] Flatcar Update Engine starting Mar 4 00:59:42.271132 update_engine[1458]: I20260304 00:59:42.270797 1458 update_check_scheduler.cc:74] Next update check in 10m24s Mar 4 00:59:42.283308 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 4 00:59:42.369703 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 4 00:59:42.370153 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 4 00:59:42.370920 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 4 00:59:42.371200 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 4 00:59:42.378058 systemd[1]: motdgen.service: Deactivated successfully. Mar 4 00:59:42.378495 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 4 00:59:42.384717 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 4 00:59:42.385015 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 4 00:59:42.412895 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Mar 4 00:59:42.412933 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 00:59:42.420511 systemd-logind[1456]: New seat seat0. Mar 4 00:59:42.428718 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 4 00:59:42.434780 jq[1465]: true Mar 4 00:59:42.443143 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 4 00:59:42.473329 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 4 00:59:42.480940 tar[1464]: linux-amd64/LICENSE Mar 4 00:59:42.482404 tar[1464]: linux-amd64/helm Mar 4 00:59:42.494544 systemd[1]: Started update-engine.service - Update Engine. Mar 4 00:59:42.506599 systemd[1]: Reached target network-online.target - Network is Online. Mar 4 00:59:42.525805 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 4 00:59:42.547451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:59:42.559575 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 4 00:59:42.565739 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 4 00:59:42.566107 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 4 00:59:42.575732 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 4 00:59:42.576489 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 4 00:59:42.609562 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 00:59:42.614805 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 00:59:42.653426 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Mar 4 00:59:42.658534 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 4 00:59:42.670653 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 4 00:59:42.686580 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 4 00:59:42.687034 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 4 00:59:42.857086 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 4 00:59:42.931102 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 4 00:59:43.071621 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 00:59:43.104513 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 00:59:43.212635 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 00:59:43.217158 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 00:59:43.371999 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 00:59:43.500597 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 00:59:43.905758 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 00:59:43.952150 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 4 00:59:43.970827 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 00:59:44.004813 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 00:59:46.098464 containerd[1466]: time="2026-03-04T00:59:46.096179703Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 00:59:46.562421 containerd[1466]: time="2026-03-04T00:59:46.561361641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.571072752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.571184330Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.571329531Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.571718387Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.571802353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.571981498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.572003208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.572752016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.572780780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.572804204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:46.573505 containerd[1466]: time="2026-03-04T00:59:46.572822337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:46.579817 containerd[1466]: time="2026-03-04T00:59:46.578419264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:46.579817 containerd[1466]: time="2026-03-04T00:59:46.579575723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:46.580057 containerd[1466]: time="2026-03-04T00:59:46.579830458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:46.580057 containerd[1466]: time="2026-03-04T00:59:46.579856617Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 4 00:59:46.580120 containerd[1466]: time="2026-03-04T00:59:46.580071569Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 00:59:46.581163 containerd[1466]: time="2026-03-04T00:59:46.580164342Z" level=info msg="metadata content store policy set" policy=shared Mar 4 00:59:46.857094 containerd[1466]: time="2026-03-04T00:59:46.855955096Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 00:59:46.857436 containerd[1466]: time="2026-03-04T00:59:46.857095004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 4 00:59:46.857436 containerd[1466]: time="2026-03-04T00:59:46.857188820Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 00:59:46.857436 containerd[1466]: time="2026-03-04T00:59:46.857326897Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 4 00:59:46.857436 containerd[1466]: time="2026-03-04T00:59:46.857359748Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 00:59:46.858818 containerd[1466]: time="2026-03-04T00:59:46.857859741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 00:59:46.859745 containerd[1466]: time="2026-03-04T00:59:46.859364721Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 4 00:59:46.860522 containerd[1466]: time="2026-03-04T00:59:46.860391898Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 00:59:46.860522 containerd[1466]: time="2026-03-04T00:59:46.860483629Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 4 00:59:46.860522 containerd[1466]: time="2026-03-04T00:59:46.860509067Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 00:59:46.860646 containerd[1466]: time="2026-03-04T00:59:46.860530286Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860680 containerd[1466]: time="2026-03-04T00:59:46.860648216Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 4 00:59:46.860680 containerd[1466]: time="2026-03-04T00:59:46.860675187Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860796 containerd[1466]: time="2026-03-04T00:59:46.860695775Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860796 containerd[1466]: time="2026-03-04T00:59:46.860720351Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860796 containerd[1466]: time="2026-03-04T00:59:46.860789770Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860952 containerd[1466]: time="2026-03-04T00:59:46.860810840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860952 containerd[1466]: time="2026-03-04T00:59:46.860831158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 00:59:46.860952 containerd[1466]: time="2026-03-04T00:59:46.860860613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861038 containerd[1466]: time="2026-03-04T00:59:46.860960309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861038 containerd[1466]: time="2026-03-04T00:59:46.860994713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861038 containerd[1466]: time="2026-03-04T00:59:46.861016103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861140215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861174028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861195588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861385613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861412763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861435896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861454792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861474879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861549689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861580326Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861666236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 4 00:59:46.861706 containerd[1466]: time="2026-03-04T00:59:46.861702855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.862189 containerd[1466]: time="2026-03-04T00:59:46.861722732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.862845378Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863000677Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863024121Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863045821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863063004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863094002Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863167669Z" level=info msg="NRI interface is disabled by configuration." Mar 4 00:59:46.864050 containerd[1466]: time="2026-03-04T00:59:46.863190512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 4 00:59:46.864759 containerd[1466]: time="2026-03-04T00:59:46.864583812Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 00:59:46.865117 containerd[1466]: time="2026-03-04T00:59:46.864858335Z" level=info msg="Connect containerd service" Mar 4 00:59:46.865152 containerd[1466]: time="2026-03-04T00:59:46.865124010Z" level=info msg="using legacy CRI server" Mar 4 00:59:46.865152 containerd[1466]: time="2026-03-04T00:59:46.865143717Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 00:59:46.867074 containerd[1466]: time="2026-03-04T00:59:46.866712476Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 00:59:46.870074 containerd[1466]: time="2026-03-04T00:59:46.869630252Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 00:59:46.873369 containerd[1466]: time="2026-03-04T00:59:46.870186360Z" level=info msg="Start subscribing containerd event" Mar 4 00:59:46.873369 containerd[1466]: time="2026-03-04T00:59:46.871064629Z" level=info msg="Start recovering state" Mar 4 00:59:46.873369 containerd[1466]: time="2026-03-04T00:59:46.871968386Z" level=info msg="Start event monitor" Mar 4 00:59:46.873369 containerd[1466]: time="2026-03-04T00:59:46.871993182Z" level=info msg="Start snapshots syncer" 
Mar 4 00:59:46.873369 containerd[1466]: time="2026-03-04T00:59:46.872010424Z" level=info msg="Start cni network conf syncer for default" Mar 4 00:59:46.873369 containerd[1466]: time="2026-03-04T00:59:46.872024280Z" level=info msg="Start streaming server" Mar 4 00:59:46.884736 containerd[1466]: time="2026-03-04T00:59:46.880394964Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 00:59:46.884736 containerd[1466]: time="2026-03-04T00:59:46.882128848Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 00:59:46.884736 containerd[1466]: time="2026-03-04T00:59:46.882433235Z" level=info msg="containerd successfully booted in 0.802826s" Mar 4 00:59:46.884504 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 00:59:47.351535 tar[1464]: linux-amd64/README.md Mar 4 00:59:47.395476 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 4 00:59:49.908180 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 00:59:49.919784 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:35072.service - OpenSSH per-connection server daemon (10.0.0.1:35072). Mar 4 00:59:50.325688 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 35072 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 00:59:50.334648 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:59:50.365508 systemd-logind[1456]: New session 1 of user core. Mar 4 00:59:50.368113 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 00:59:50.383153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 00:59:50.589641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 00:59:50.707961 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 4 00:59:50.741014 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 00:59:51.322856 systemd[1555]: Queued start job for default target default.target. Mar 4 00:59:51.346564 systemd[1555]: Created slice app.slice - User Application Slice. Mar 4 00:59:51.346656 systemd[1555]: Reached target paths.target - Paths. Mar 4 00:59:51.346681 systemd[1555]: Reached target timers.target - Timers. Mar 4 00:59:51.349734 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 00:59:51.378443 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 00:59:51.378649 systemd[1555]: Reached target sockets.target - Sockets. Mar 4 00:59:51.378668 systemd[1555]: Reached target basic.target - Basic System. Mar 4 00:59:51.378714 systemd[1555]: Reached target default.target - Main User Target. Mar 4 00:59:51.378762 systemd[1555]: Startup finished in 605ms. Mar 4 00:59:51.379160 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 00:59:51.392707 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 00:59:51.586698 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:35082.service - OpenSSH per-connection server daemon (10.0.0.1:35082). Mar 4 00:59:51.852846 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 35082 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 00:59:51.857632 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:59:51.867474 systemd-logind[1456]: New session 2 of user core. Mar 4 00:59:51.875555 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 00:59:52.154539 sshd[1566]: pam_unix(sshd:session): session closed for user core Mar 4 00:59:52.159715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:59:52.165173 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:35082.service: Deactivated successfully. 
Mar 4 00:59:52.166886 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:59:52.167491 systemd[1]: session-2.scope: Deactivated successfully. Mar 4 00:59:52.169400 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Mar 4 00:59:52.169588 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 00:59:52.172148 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Mar 4 00:59:52.172607 systemd[1]: Startup finished in 3.736s (kernel) + 22.093s (initrd) + 20.333s (userspace) = 46.163s. Mar 4 00:59:52.173886 systemd-logind[1456]: Removed session 2. Mar 4 00:59:52.226348 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 00:59:52.231115 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:59:52.239856 systemd-logind[1456]: New session 3 of user core. Mar 4 00:59:52.245660 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 00:59:52.423182 sshd[1579]: pam_unix(sshd:session): session closed for user core Mar 4 00:59:52.430891 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:35098.service: Deactivated successfully. Mar 4 00:59:52.435026 systemd[1]: session-3.scope: Deactivated successfully. Mar 4 00:59:52.436717 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Mar 4 00:59:52.439085 systemd-logind[1456]: Removed session 3. 
Mar 4 00:59:56.147133 kubelet[1575]: E0304 00:59:56.146742 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:59:56.154065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:59:56.154519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:59:56.155377 systemd[1]: kubelet.service: Consumed 10.195s CPU time. Mar 4 01:00:02.444566 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:53656.service - OpenSSH per-connection server daemon (10.0.0.1:53656). Mar 4 01:00:02.494325 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 53656 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:02.497794 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:02.505863 systemd-logind[1456]: New session 4 of user core. Mar 4 01:00:02.516639 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 01:00:02.581207 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:02.598974 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:53656.service: Deactivated successfully. Mar 4 01:00:02.601153 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 01:00:02.603441 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Mar 4 01:00:02.605360 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:53666.service - OpenSSH per-connection server daemon (10.0.0.1:53666). Mar 4 01:00:02.606791 systemd-logind[1456]: Removed session 4. 
Mar 4 01:00:02.675397 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:02.678125 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:02.686095 systemd-logind[1456]: New session 5 of user core. Mar 4 01:00:02.693529 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 01:00:02.753910 sshd[1600]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:02.766972 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:53666.service: Deactivated successfully. Mar 4 01:00:02.769167 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 01:00:02.771138 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Mar 4 01:00:02.773086 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:53682.service - OpenSSH per-connection server daemon (10.0.0.1:53682). Mar 4 01:00:02.774524 systemd-logind[1456]: Removed session 5. Mar 4 01:00:02.818050 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 53682 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:02.820281 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:02.826832 systemd-logind[1456]: New session 6 of user core. Mar 4 01:00:02.833537 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 01:00:02.902721 sshd[1607]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:02.920486 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:53682.service: Deactivated successfully. Mar 4 01:00:02.922528 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 01:00:02.924179 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Mar 4 01:00:02.932843 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:53692.service - OpenSSH per-connection server daemon (10.0.0.1:53692). Mar 4 01:00:02.934561 systemd-logind[1456]: Removed session 6. 
Mar 4 01:00:02.990527 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 53692 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:02.992644 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:02.998525 systemd-logind[1456]: New session 7 of user core. Mar 4 01:00:03.013486 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 4 01:00:03.093961 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 4 01:00:03.094617 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:03.117346 sudo[1617]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:03.120390 sshd[1614]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:03.133075 systemd[1]: sshd@6-10.0.0.41:22-10.0.0.1:53692.service: Deactivated successfully. Mar 4 01:00:03.135176 systemd[1]: session-7.scope: Deactivated successfully. Mar 4 01:00:03.137188 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Mar 4 01:00:03.139181 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:53704.service - OpenSSH per-connection server daemon (10.0.0.1:53704). Mar 4 01:00:03.140497 systemd-logind[1456]: Removed session 7. Mar 4 01:00:03.192732 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 53704 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:03.196591 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:03.205611 systemd-logind[1456]: New session 8 of user core. Mar 4 01:00:03.287512 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 4 01:00:03.359379 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 4 01:00:03.360061 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:03.367676 sudo[1626]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:03.379686 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 4 01:00:03.380378 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:03.420553 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:03.424314 auditctl[1629]: No rules Mar 4 01:00:03.424984 systemd[1]: audit-rules.service: Deactivated successfully. Mar 4 01:00:03.425513 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:03.430485 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:03.480179 augenrules[1647]: No rules Mar 4 01:00:03.482344 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:03.483867 sudo[1625]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:03.490651 sshd[1622]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:03.504178 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:53704.service: Deactivated successfully. Mar 4 01:00:03.506706 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 01:00:03.508082 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Mar 4 01:00:03.517868 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:53716.service - OpenSSH per-connection server daemon (10.0.0.1:53716). Mar 4 01:00:03.519602 systemd-logind[1456]: Removed session 8. 
Mar 4 01:00:03.559069 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 53716 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:03.561427 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:03.568750 systemd-logind[1456]: New session 9 of user core. Mar 4 01:00:03.578477 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 01:00:03.643594 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 01:00:03.644281 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:06.407639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 01:00:06.435719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:07.542835 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 4 01:00:07.548763 (dockerd)[1680]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 01:00:08.215593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:00:08.232775 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:08.748868 kubelet[1685]: E0304 01:00:08.748652 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:08.757765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:08.758170 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 4 01:00:08.759153 systemd[1]: kubelet.service: Consumed 2.138s CPU time. Mar 4 01:00:11.213960 dockerd[1680]: time="2026-03-04T01:00:11.213512217Z" level=info msg="Starting up" Mar 4 01:00:12.578434 dockerd[1680]: time="2026-03-04T01:00:12.573644433Z" level=info msg="Loading containers: start." Mar 4 01:00:13.146769 kernel: Initializing XFRM netlink socket Mar 4 01:00:13.829758 systemd-networkd[1404]: docker0: Link UP Mar 4 01:00:13.874326 dockerd[1680]: time="2026-03-04T01:00:13.873943244Z" level=info msg="Loading containers: done." Mar 4 01:00:13.954801 dockerd[1680]: time="2026-03-04T01:00:13.954640936Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 01:00:13.955302 dockerd[1680]: time="2026-03-04T01:00:13.954992608Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 01:00:13.955675 dockerd[1680]: time="2026-03-04T01:00:13.955492623Z" level=info msg="Daemon has completed initialization" Mar 4 01:00:14.240592 dockerd[1680]: time="2026-03-04T01:00:14.237914987Z" level=info msg="API listen on /run/docker.sock" Mar 4 01:00:14.239986 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 4 01:00:18.827459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 01:00:18.838614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:19.167809 containerd[1466]: time="2026-03-04T01:00:19.167751597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 4 01:00:19.680981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:00:19.711469 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:20.829506 kubelet[1850]: E0304 01:00:20.829037 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:20.837583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:20.837970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:20.838737 systemd[1]: kubelet.service: Consumed 1.911s CPU time. Mar 4 01:00:21.066758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870950191.mount: Deactivated successfully. Mar 4 01:00:27.078704 containerd[1466]: time="2026-03-04T01:00:27.078335118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:27.080595 containerd[1466]: time="2026-03-04T01:00:27.080266707Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 4 01:00:27.082664 containerd[1466]: time="2026-03-04T01:00:27.082577001Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:27.094919 containerd[1466]: time="2026-03-04T01:00:27.093868188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:27.102622 containerd[1466]: time="2026-03-04T01:00:27.102537653Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 7.934724413s" Mar 4 01:00:27.102811 containerd[1466]: time="2026-03-04T01:00:27.102626179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 4 01:00:27.107605 containerd[1466]: time="2026-03-04T01:00:27.107524437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 4 01:00:28.026443 update_engine[1458]: I20260304 01:00:28.025831 1458 update_attempter.cc:509] Updating boot flags... Mar 4 01:00:28.218473 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1923) Mar 4 01:00:28.381168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1927) Mar 4 01:00:31.077975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 4 01:00:31.108869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:31.654671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:00:31.663173 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:31.907914 containerd[1466]: time="2026-03-04T01:00:31.906927326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:31.910272 containerd[1466]: time="2026-03-04T01:00:31.909712032Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 4 01:00:31.911358 containerd[1466]: time="2026-03-04T01:00:31.911271325Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:31.917946 containerd[1466]: time="2026-03-04T01:00:31.917743984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:31.920072 containerd[1466]: time="2026-03-04T01:00:31.919820342Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 4.812250902s" Mar 4 01:00:31.920072 containerd[1466]: time="2026-03-04T01:00:31.919916011Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 4 01:00:31.924182 containerd[1466]: time="2026-03-04T01:00:31.924013839Z" level=info msg="PullImage 
\"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 4 01:00:32.240590 kubelet[1942]: E0304 01:00:32.239701 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:32.247976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:32.248427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:32.249031 systemd[1]: kubelet.service: Consumed 1.242s CPU time. Mar 4 01:00:34.425959 containerd[1466]: time="2026-03-04T01:00:34.425748624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:34.427885 containerd[1466]: time="2026-03-04T01:00:34.427163730Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 4 01:00:34.429154 containerd[1466]: time="2026-03-04T01:00:34.429053380Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:34.433336 containerd[1466]: time="2026-03-04T01:00:34.433193998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:34.434711 containerd[1466]: time="2026-03-04T01:00:34.434628695Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.510496105s" Mar 4 01:00:34.434711 containerd[1466]: time="2026-03-04T01:00:34.434692574Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 4 01:00:34.438532 containerd[1466]: time="2026-03-04T01:00:34.438438344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 4 01:00:36.455453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033627862.mount: Deactivated successfully. Mar 4 01:00:39.359761 containerd[1466]: time="2026-03-04T01:00:39.359347715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:39.361583 containerd[1466]: time="2026-03-04T01:00:39.360670473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 4 01:00:39.362410 containerd[1466]: time="2026-03-04T01:00:39.362343989Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:39.367202 containerd[1466]: time="2026-03-04T01:00:39.367060218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:39.368672 containerd[1466]: time="2026-03-04T01:00:39.368542135Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 4.930066832s" Mar 4 01:00:39.368672 containerd[1466]: time="2026-03-04T01:00:39.368658622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 4 01:00:39.372935 containerd[1466]: time="2026-03-04T01:00:39.372743154Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 4 01:00:40.446651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85351602.mount: Deactivated successfully. Mar 4 01:00:42.328781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 4 01:00:42.336666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:42.852728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:00:43.440005 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:44.773343 kubelet[2024]: E0304 01:00:44.772857 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:44.781297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:44.781586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:44.782334 systemd[1]: kubelet.service: Consumed 3.198s CPU time. 
Mar 4 01:00:45.277970 containerd[1466]: time="2026-03-04T01:00:45.277587835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:45.277970 containerd[1466]: time="2026-03-04T01:00:45.278376667Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 4 01:00:45.280872 containerd[1466]: time="2026-03-04T01:00:45.280769349Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:45.285850 containerd[1466]: time="2026-03-04T01:00:45.285753369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:45.288641 containerd[1466]: time="2026-03-04T01:00:45.288472943Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.915643548s" Mar 4 01:00:45.288641 containerd[1466]: time="2026-03-04T01:00:45.288623354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 4 01:00:45.296694 containerd[1466]: time="2026-03-04T01:00:45.294732411Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 4 01:00:46.063899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271318729.mount: Deactivated successfully. 
Mar 4 01:00:46.078905 containerd[1466]: time="2026-03-04T01:00:46.078811626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:46.080477 containerd[1466]: time="2026-03-04T01:00:46.080257621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 4 01:00:46.082148 containerd[1466]: time="2026-03-04T01:00:46.082101922Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:46.090155 containerd[1466]: time="2026-03-04T01:00:46.089071523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:46.095952 containerd[1466]: time="2026-03-04T01:00:46.095584919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 800.736803ms" Mar 4 01:00:46.095952 containerd[1466]: time="2026-03-04T01:00:46.095763733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 4 01:00:46.102481 containerd[1466]: time="2026-03-04T01:00:46.102376262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 4 01:00:47.222950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588703916.mount: Deactivated successfully. 
Mar 4 01:00:52.573993 containerd[1466]: time="2026-03-04T01:00:52.573625307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:52.573993 containerd[1466]: time="2026-03-04T01:00:52.574108821Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 4 01:00:52.577158 containerd[1466]: time="2026-03-04T01:00:52.577091091Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:52.603648 containerd[1466]: time="2026-03-04T01:00:52.603320010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:52.605027 containerd[1466]: time="2026-03-04T01:00:52.604940675Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 6.502475396s" Mar 4 01:00:52.605070 containerd[1466]: time="2026-03-04T01:00:52.605051061Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 4 01:00:54.828130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 4 01:00:54.843659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:55.324331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:00:55.330405 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:00:55.536685 kubelet[2133]: E0304 01:00:55.536403 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:00:55.543518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:00:55.543759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:00:58.355698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:00:58.378895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:00:58.447432 systemd[1]: Reloading requested from client PID 2149 ('systemctl') (unit session-9.scope)...
Mar 4 01:00:58.447573 systemd[1]: Reloading...
Mar 4 01:00:58.589890 zram_generator::config[2191]: No configuration found.
Mar 4 01:00:58.759137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:00:58.878065 systemd[1]: Reloading finished in 429 ms.
Mar 4 01:00:58.947694 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 4 01:00:58.947867 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 4 01:00:58.948514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:00:58.960845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:00:59.164607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:00:59.171177 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:00:59.373948 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:00:59.373948 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 01:00:59.373948 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:00:59.375078 kubelet[2237]: I0304 01:00:59.374012 2237 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 01:01:00.309580 kubelet[2237]: I0304 01:01:00.308986 2237 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 01:01:00.309580 kubelet[2237]: I0304 01:01:00.309535 2237 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:01:00.311654 kubelet[2237]: I0304 01:01:00.311371 2237 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 01:01:00.367454 kubelet[2237]: E0304 01:01:00.367355 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 01:01:00.369896 kubelet[2237]: I0304 01:01:00.369616 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:01:00.584016 kubelet[2237]: E0304 01:01:00.583915 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:01:00.584016 kubelet[2237]: I0304 01:01:00.583979 2237 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:01:00.614205 kubelet[2237]: I0304 01:01:00.614110 2237 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 01:01:00.615026 kubelet[2237]: I0304 01:01:00.614946 2237 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:01:00.615508 kubelet[2237]: I0304 01:01:00.615038 2237 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 01:01:00.615733 kubelet[2237]: I0304 01:01:00.615552 2237 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 01:01:00.615733 kubelet[2237]: I0304 01:01:00.615567 2237 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 01:01:00.616064 kubelet[2237]: I0304 01:01:00.616007 2237 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:01:00.622177 kubelet[2237]: I0304 01:01:00.622055 2237 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 01:01:00.622177 kubelet[2237]: I0304 01:01:00.622133 2237 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:01:00.622416 kubelet[2237]: I0304 01:01:00.622359 2237 kubelet.go:386] "Adding apiserver pod source"
Mar 4 01:01:00.622623 kubelet[2237]: I0304 01:01:00.622476 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:01:00.627095 kubelet[2237]: I0304 01:01:00.627016 2237 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:01:00.628370 kubelet[2237]: I0304 01:01:00.628195 2237 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:01:00.630100 kubelet[2237]: E0304 01:01:00.629914 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:01:00.630100 kubelet[2237]: E0304 01:01:00.629915 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:01:00.631072 kubelet[2237]: W0304 01:01:00.630976 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 01:01:00.716207 kubelet[2237]: I0304 01:01:00.716019 2237 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 01:01:00.717474 kubelet[2237]: I0304 01:01:00.716565 2237 server.go:1289] "Started kubelet"
Mar 4 01:01:00.720393 kubelet[2237]: I0304 01:01:00.719386 2237 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:01:00.720393 kubelet[2237]: I0304 01:01:00.720173 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 01:01:00.721067 kubelet[2237]: I0304 01:01:00.720711 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:01:00.721765 kubelet[2237]: I0304 01:01:00.721737 2237 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:01:00.722026 kubelet[2237]: I0304 01:01:00.721884 2237 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 01:01:00.728851 kubelet[2237]: I0304 01:01:00.728718 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:01:00.729447 kubelet[2237]: I0304 01:01:00.729384 2237 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 01:01:00.730140 kubelet[2237]: E0304 01:01:00.729768 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 4 01:01:00.731517 kubelet[2237]: I0304 01:01:00.731436 2237 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 01:01:00.731873 kubelet[2237]: I0304 01:01:00.731809 2237 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 01:01:00.733879 kubelet[2237]: E0304 01:01:00.733816 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:01:00.733945 kubelet[2237]: E0304 01:01:00.733928 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms"
Mar 4 01:01:00.735337 kubelet[2237]: I0304 01:01:00.734991 2237 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:01:00.735337 kubelet[2237]: I0304 01:01:00.735126 2237 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:01:00.750583 kubelet[2237]: I0304 01:01:00.748905 2237 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:01:00.750583 kubelet[2237]: E0304 01:01:00.749039 2237 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:01:00.750583 kubelet[2237]: E0304 01:01:00.748459 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997da172cc4ac1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:01:00.716198593 +0000 UTC m=+1.538994558,LastTimestamp:2026-03-04 01:01:00.716198593 +0000 UTC m=+1.538994558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 4 01:01:00.831003 kubelet[2237]: E0304 01:01:00.830804 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 4 01:01:00.856121 kubelet[2237]: I0304 01:01:00.855443 2237 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 01:01:00.856121 kubelet[2237]: I0304 01:01:00.855494 2237 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 01:01:00.856121 kubelet[2237]: I0304 01:01:00.855658 2237 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:01:00.866551 kubelet[2237]: I0304 01:01:00.866453 2237 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:01:00.869384 kubelet[2237]: I0304 01:01:00.869325 2237 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:01:00.869685 kubelet[2237]: I0304 01:01:00.869617 2237 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 01:01:00.869795 kubelet[2237]: I0304 01:01:00.869781 2237 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:01:00.869909 kubelet[2237]: I0304 01:01:00.869862 2237 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 4 01:01:00.870181 kubelet[2237]: E0304 01:01:00.870063 2237 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:01:00.935596 kubelet[2237]: E0304 01:01:00.935162 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 4 01:01:00.935596 kubelet[2237]: I0304 01:01:00.935409 2237 policy_none.go:49] "None policy: Start"
Mar 4 01:01:00.935596 kubelet[2237]: I0304 01:01:00.935633 2237 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 4 01:01:00.935596 kubelet[2237]: I0304 01:01:00.935755 2237 state_mem.go:35] "Initializing new in-memory state store"
Mar 4 01:01:00.951140 kubelet[2237]: E0304 01:01:00.950687 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:01:00.952906 kubelet[2237]: E0304 01:01:00.951892 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms"
Mar 4 01:01:00.974605 kubelet[2237]: E0304 01:01:00.973942 2237 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 4 01:01:00.990499 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 4 01:01:01.015922 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 4 01:01:01.022699 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 4 01:01:01.036006 kubelet[2237]: E0304 01:01:01.035920 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 4 01:01:01.036729 kubelet[2237]: E0304 01:01:01.036668 2237 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 01:01:01.037484 kubelet[2237]: I0304 01:01:01.037459 2237 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 4 01:01:01.038089 kubelet[2237]: I0304 01:01:01.037578 2237 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 01:01:01.038491 kubelet[2237]: I0304 01:01:01.038380 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 4 01:01:01.041930 kubelet[2237]: E0304 01:01:01.041872 2237 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 01:01:01.042551 kubelet[2237]: E0304 01:01:01.042467 2237 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 4 01:01:01.244329 kubelet[2237]: I0304 01:01:01.242072 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b9f7e59646c72dc3156f8dc0cfb582f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b9f7e59646c72dc3156f8dc0cfb582f\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:01.244329 kubelet[2237]: I0304 01:01:01.242182 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b9f7e59646c72dc3156f8dc0cfb582f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b9f7e59646c72dc3156f8dc0cfb582f\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:01.244329 kubelet[2237]: I0304 01:01:01.242339 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b9f7e59646c72dc3156f8dc0cfb582f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b9f7e59646c72dc3156f8dc0cfb582f\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:01.257466 kubelet[2237]: I0304 01:01:01.256538 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:01.258650 kubelet[2237]: E0304 01:01:01.258196 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost"
Mar 4 01:01:01.331606 systemd[1]: Created slice kubepods-burstable-pod4b9f7e59646c72dc3156f8dc0cfb582f.slice - libcontainer container kubepods-burstable-pod4b9f7e59646c72dc3156f8dc0cfb582f.slice.
Mar 4 01:01:01.348466 kubelet[2237]: I0304 01:01:01.346749 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:01.351738 kubelet[2237]: I0304 01:01:01.350147 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:01.351738 kubelet[2237]: I0304 01:01:01.350369 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:01.351738 kubelet[2237]: I0304 01:01:01.350436 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:01.351738 kubelet[2237]: I0304 01:01:01.350599 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:01.351738 kubelet[2237]: I0304 01:01:01.350624 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:01.354878 kubelet[2237]: E0304 01:01:01.354744 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms"
Mar 4 01:01:01.355464 kubelet[2237]: E0304 01:01:01.355394 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:01.356972 kubelet[2237]: E0304 01:01:01.356704 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:01.361594 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice.
Mar 4 01:01:01.362941 containerd[1466]: time="2026-03-04T01:01:01.362892200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b9f7e59646c72dc3156f8dc0cfb582f,Namespace:kube-system,Attempt:0,}"
Mar 4 01:01:01.371046 kubelet[2237]: E0304 01:01:01.370926 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:01.375917 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice.
Mar 4 01:01:01.380333 kubelet[2237]: E0304 01:01:01.380040 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:01.465324 kubelet[2237]: I0304 01:01:01.465030 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:01.466516 kubelet[2237]: E0304 01:01:01.466080 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost"
Mar 4 01:01:01.630622 kubelet[2237]: E0304 01:01:01.630476 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:01:01.680957 kubelet[2237]: E0304 01:01:01.679189 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:01.744684 kubelet[2237]: E0304 01:01:01.742538 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:01.782977 containerd[1466]: time="2026-03-04T01:01:01.782654029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 4 01:01:01.784308 kubelet[2237]: E0304 01:01:01.784082 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:01:01.784401 containerd[1466]: time="2026-03-04T01:01:01.782671825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 4 01:01:01.943748 kubelet[2237]: E0304 01:01:01.930047 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:01:02.004850 kubelet[2237]: I0304 01:01:02.004716 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:02.015029 kubelet[2237]: E0304 01:01:02.014205 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost"
Mar 4 01:01:02.016925 kubelet[2237]: E0304 01:01:02.016539 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997da172cc4ac1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:01:00.716198593 +0000 UTC m=+1.538994558,LastTimestamp:2026-03-04 01:01:00.716198593 +0000 UTC m=+1.538994558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 4 01:01:02.079490 kubelet[2237]: E0304 01:01:02.079196 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:01:02.168179 kubelet[2237]: E0304 01:01:02.167814 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="1.6s"
Mar 4 01:01:02.277713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520602233.mount: Deactivated successfully.
Mar 4 01:01:02.289594 containerd[1466]: time="2026-03-04T01:01:02.288797379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:02.310648 containerd[1466]: time="2026-03-04T01:01:02.310522216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 4 01:01:02.312805 containerd[1466]: time="2026-03-04T01:01:02.312605638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:02.314572 containerd[1466]: time="2026-03-04T01:01:02.314404444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:02.316008 containerd[1466]: time="2026-03-04T01:01:02.315963553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 4 01:01:02.317998 containerd[1466]: time="2026-03-04T01:01:02.317848941Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:02.319133 containerd[1466]: time="2026-03-04T01:01:02.319036420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 4 01:01:02.325333 containerd[1466]: time="2026-03-04T01:01:02.325056252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:02.326503 containerd[1466]: time="2026-03-04T01:01:02.326415682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.953819ms"
Mar 4 01:01:02.327377 containerd[1466]: time="2026-03-04T01:01:02.327180626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 963.12219ms"
Mar 4 01:01:02.330028 containerd[1466]: time="2026-03-04T01:01:02.329901680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.809713ms"
Mar 4 01:01:02.480573 kubelet[2237]: E0304 01:01:02.479905 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 01:01:02.900398 kubelet[2237]: I0304 01:01:02.899963 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:02.901978 kubelet[2237]: E0304 01:01:02.901438 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost"
Mar 4 01:01:03.209146 containerd[1466]: time="2026-03-04T01:01:03.181349881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:01:03.209146 containerd[1466]: time="2026-03-04T01:01:03.182717925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:01:03.209146 containerd[1466]: time="2026-03-04T01:01:03.183141394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:03.209146 containerd[1466]: time="2026-03-04T01:01:03.184013361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:03.248852 containerd[1466]: time="2026-03-04T01:01:03.220755459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:01:03.248852 containerd[1466]: time="2026-03-04T01:01:03.221154294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:01:03.248852 containerd[1466]: time="2026-03-04T01:01:03.221168109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:03.248852 containerd[1466]: time="2026-03-04T01:01:03.221670147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:03.663609 systemd[1]: Started cri-containerd-7514bf3ea278d8671d2c849e0cf0b57720dd94578b3c1fa91abf670814b3462a.scope - libcontainer container 7514bf3ea278d8671d2c849e0cf0b57720dd94578b3c1fa91abf670814b3462a.
Mar 4 01:01:03.673845 systemd[1]: Started cri-containerd-5376e701a4b5cfbef8a9283d11ef76c4437a4217ad6d527b899274796d7a074f.scope - libcontainer container 5376e701a4b5cfbef8a9283d11ef76c4437a4217ad6d527b899274796d7a074f.
Mar 4 01:01:03.857922 kubelet[2237]: E0304 01:01:03.857343 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="3.2s"
Mar 4 01:01:03.862788 kubelet[2237]: E0304 01:01:03.862030 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:01:03.868179 containerd[1466]: time="2026-03-04T01:01:03.866500826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:01:03.868179 containerd[1466]: time="2026-03-04T01:01:03.867694144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:01:03.868179 containerd[1466]: time="2026-03-04T01:01:03.867719481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:03.868179 containerd[1466]: time="2026-03-04T01:01:03.868022797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:04.177287 kubelet[2237]: E0304 01:01:04.177103 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:01:04.177974 kubelet[2237]: E0304 01:01:04.177760 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:01:04.350019 containerd[1466]: time="2026-03-04T01:01:04.349045041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"7514bf3ea278d8671d2c849e0cf0b57720dd94578b3c1fa91abf670814b3462a\""
Mar 4 01:01:04.357069 kubelet[2237]: E0304 01:01:04.356849 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:04.369386 containerd[1466]: time="2026-03-04T01:01:04.369067216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b9f7e59646c72dc3156f8dc0cfb582f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5376e701a4b5cfbef8a9283d11ef76c4437a4217ad6d527b899274796d7a074f\""
Mar 4 01:01:04.372562 containerd[1466]: time="2026-03-04T01:01:04.372349551Z" level=info msg="CreateContainer within sandbox \"7514bf3ea278d8671d2c849e0cf0b57720dd94578b3c1fa91abf670814b3462a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 4 01:01:04.372879 kubelet[2237]: E0304 01:01:04.372599 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:04.381382 containerd[1466]: time="2026-03-04T01:01:04.380943246Z" level=info msg="CreateContainer within sandbox \"5376e701a4b5cfbef8a9283d11ef76c4437a4217ad6d527b899274796d7a074f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 4 01:01:04.452781 systemd[1]: run-containerd-runc-k8s.io-5928cc6591ef0ffd4f95c50317e41cab326c106deb6204966ea220c871bc3da7-runc.o73tsQ.mount: Deactivated successfully.
Mar 4 01:01:04.517461 kubelet[2237]: E0304 01:01:04.517185 2237 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:01:04.518051 kubelet[2237]: I0304 01:01:04.517848 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:04.519366 kubelet[2237]: E0304 01:01:04.518511 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost"
Mar 4 01:01:04.526417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201704046.mount: Deactivated successfully.
Mar 4 01:01:04.537774 containerd[1466]: time="2026-03-04T01:01:04.537534870Z" level=info msg="CreateContainer within sandbox \"7514bf3ea278d8671d2c849e0cf0b57720dd94578b3c1fa91abf670814b3462a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1cd28318d396248730319e3683ad074080fe15b60b069fc0c11728307a3ed3e6\""
Mar 4 01:01:04.540642 containerd[1466]: time="2026-03-04T01:01:04.540512470Z" level=info msg="StartContainer for \"1cd28318d396248730319e3683ad074080fe15b60b069fc0c11728307a3ed3e6\""
Mar 4 01:01:04.540855 systemd[1]: Started cri-containerd-5928cc6591ef0ffd4f95c50317e41cab326c106deb6204966ea220c871bc3da7.scope - libcontainer container 5928cc6591ef0ffd4f95c50317e41cab326c106deb6204966ea220c871bc3da7.
Mar 4 01:01:04.543752 containerd[1466]: time="2026-03-04T01:01:04.543586476Z" level=info msg="CreateContainer within sandbox \"5376e701a4b5cfbef8a9283d11ef76c4437a4217ad6d527b899274796d7a074f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a186e2a615d5157e96fe3256a9086b00c5a673ecb66a644b0421a21583c42d2\""
Mar 4 01:01:04.545009 containerd[1466]: time="2026-03-04T01:01:04.544582253Z" level=info msg="StartContainer for \"8a186e2a615d5157e96fe3256a9086b00c5a673ecb66a644b0421a21583c42d2\""
Mar 4 01:01:04.796795 systemd[1]: Started cri-containerd-8a186e2a615d5157e96fe3256a9086b00c5a673ecb66a644b0421a21583c42d2.scope - libcontainer container 8a186e2a615d5157e96fe3256a9086b00c5a673ecb66a644b0421a21583c42d2.
Mar 4 01:01:04.833055 containerd[1466]: time="2026-03-04T01:01:04.832821433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"5928cc6591ef0ffd4f95c50317e41cab326c106deb6204966ea220c871bc3da7\""
Mar 4 01:01:04.834526 kubelet[2237]: E0304 01:01:04.834439 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:04.842286 containerd[1466]: time="2026-03-04T01:01:04.842139290Z" level=info msg="CreateContainer within sandbox \"5928cc6591ef0ffd4f95c50317e41cab326c106deb6204966ea220c871bc3da7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 4 01:01:05.042391 containerd[1466]: time="2026-03-04T01:01:05.041801924Z" level=info msg="CreateContainer within sandbox \"5928cc6591ef0ffd4f95c50317e41cab326c106deb6204966ea220c871bc3da7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"388954837e97f998bdf713171f8f50c3c30ac09a51267a77b358f0afd71a9fe0\""
Mar 4 01:01:05.047630 containerd[1466]: time="2026-03-04T01:01:05.047514665Z" level=info msg="StartContainer for \"388954837e97f998bdf713171f8f50c3c30ac09a51267a77b358f0afd71a9fe0\""
Mar 4 01:01:05.056828 systemd[1]: Started cri-containerd-1cd28318d396248730319e3683ad074080fe15b60b069fc0c11728307a3ed3e6.scope - libcontainer container 1cd28318d396248730319e3683ad074080fe15b60b069fc0c11728307a3ed3e6.
Mar 4 01:01:05.280201 containerd[1466]: time="2026-03-04T01:01:05.279812590Z" level=info msg="StartContainer for \"8a186e2a615d5157e96fe3256a9086b00c5a673ecb66a644b0421a21583c42d2\" returns successfully"
Mar 4 01:01:05.327270 kubelet[2237]: E0304 01:01:05.326018 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:05.328020 kubelet[2237]: E0304 01:01:05.327574 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:05.335624 systemd[1]: Started cri-containerd-388954837e97f998bdf713171f8f50c3c30ac09a51267a77b358f0afd71a9fe0.scope - libcontainer container 388954837e97f998bdf713171f8f50c3c30ac09a51267a77b358f0afd71a9fe0.
Mar 4 01:01:05.558206 containerd[1466]: time="2026-03-04T01:01:05.515886363Z" level=info msg="StartContainer for \"1cd28318d396248730319e3683ad074080fe15b60b069fc0c11728307a3ed3e6\" returns successfully"
Mar 4 01:01:05.600894 containerd[1466]: time="2026-03-04T01:01:05.600799555Z" level=info msg="StartContainer for \"388954837e97f998bdf713171f8f50c3c30ac09a51267a77b358f0afd71a9fe0\" returns successfully"
Mar 4 01:01:06.532689 kubelet[2237]: E0304 01:01:06.532572 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:06.536500 kubelet[2237]: E0304 01:01:06.535172 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:06.539402 kubelet[2237]: E0304 01:01:06.539350 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:06.540455 kubelet[2237]: E0304 01:01:06.540401 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:06.542056 kubelet[2237]: E0304 01:01:06.542010 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:06.542433 kubelet[2237]: E0304 01:01:06.542288 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:07.705933 kubelet[2237]: E0304 01:01:07.705740 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:07.705933 kubelet[2237]: E0304 01:01:07.705859 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:07.705933 kubelet[2237]: E0304 01:01:07.706061 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:07.707825 kubelet[2237]: E0304 01:01:07.706138 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:07.707825 kubelet[2237]: E0304 01:01:07.706693 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:07.707825 kubelet[2237]: E0304 01:01:07.706876 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:07.727310 kubelet[2237]: I0304 01:01:07.723836 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:08.737857 kubelet[2237]: E0304 01:01:08.737622 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:08.737857 kubelet[2237]: E0304 01:01:08.737645 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:08.737857 kubelet[2237]: E0304 01:01:08.738170 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:08.741697 kubelet[2237]: E0304 01:01:08.738515 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:11.043974 kubelet[2237]: E0304 01:01:11.043692 2237 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 4 01:01:12.542500 kubelet[2237]: E0304 01:01:12.538748 2237 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 4 01:01:12.555604 kubelet[2237]: E0304 01:01:12.555417 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:12.556844 kubelet[2237]: E0304 01:01:12.556664 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:12.607047 kubelet[2237]: E0304 01:01:12.605581 2237 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:12.607047 kubelet[2237]: E0304 01:01:12.606631 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:12.615325 kubelet[2237]: I0304 01:01:12.612516 2237 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 4 01:01:12.615325 kubelet[2237]: E0304 01:01:12.612624 2237 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 4 01:01:12.631346 kubelet[2237]: I0304 01:01:12.631111 2237 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:12.755286 kubelet[2237]: E0304 01:01:12.754586 2237 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18997da172cc4ac1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:01:00.716198593 +0000 UTC m=+1.538994558,LastTimestamp:2026-03-04 01:01:00.716198593 +0000 UTC m=+1.538994558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 4 01:01:12.773945 kubelet[2237]: E0304 01:01:12.773889 2237 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:12.776292 kubelet[2237]: I0304 01:01:12.774187 2237 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:12.778515 kubelet[2237]: E0304 01:01:12.778343 2237 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:12.778515 kubelet[2237]: I0304 01:01:12.778431 2237 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:12.781297 kubelet[2237]: E0304 01:01:12.781185 2237 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:12.831422 kubelet[2237]: I0304 01:01:12.831265 2237 apiserver.go:52] "Watching apiserver"
Mar 4 01:01:12.963502 kubelet[2237]: I0304 01:01:12.962007 2237 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 4 01:01:15.202356 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-9.scope)...
Mar 4 01:01:15.202431 systemd[1]: Reloading...
Mar 4 01:01:15.307291 zram_generator::config[2569]: No configuration found.
Mar 4 01:01:15.449960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:01:15.707278 systemd[1]: Reloading finished in 504 ms.
Mar 4 01:01:15.850312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:01:15.898147 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 01:01:15.898921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:01:15.899073 systemd[1]: kubelet.service: Consumed 8.139s CPU time, 133.6M memory peak, 0B memory swap peak.
Mar 4 01:01:15.928859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:01:16.689559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:01:16.690059 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:01:16.792623 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:01:16.792623 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 01:01:16.792623 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:01:16.793492 kubelet[2614]: I0304 01:01:16.792644 2614 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 01:01:16.819132 kubelet[2614]: I0304 01:01:16.818996 2614 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 01:01:16.819132 kubelet[2614]: I0304 01:01:16.819071 2614 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:01:16.819925 kubelet[2614]: I0304 01:01:16.819820 2614 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 01:01:16.821630 kubelet[2614]: I0304 01:01:16.821500 2614 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 4 01:01:16.827275 kubelet[2614]: I0304 01:01:16.825645 2614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:01:16.841144 kubelet[2614]: E0304 01:01:16.840996 2614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:01:16.841144 kubelet[2614]: I0304 01:01:16.841091 2614 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:01:16.851440 kubelet[2614]: I0304 01:01:16.851191 2614 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 01:01:16.851911 kubelet[2614]: I0304 01:01:16.851793 2614 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:01:16.852072 kubelet[2614]: I0304 01:01:16.851866 2614 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 01:01:16.852072 kubelet[2614]: I0304 01:01:16.852069 2614 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 01:01:16.852490 kubelet[2614]: I0304 01:01:16.852081 2614 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 01:01:16.852490 kubelet[2614]: I0304 01:01:16.852133 2614 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:01:16.852743 kubelet[2614]: I0304 01:01:16.852599 2614 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 01:01:16.852743 kubelet[2614]: I0304 01:01:16.852670 2614 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:01:16.852743 kubelet[2614]: I0304 01:01:16.852698 2614 kubelet.go:386] "Adding apiserver pod source"
Mar 4 01:01:16.852743 kubelet[2614]: I0304 01:01:16.852715 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:01:16.857348 kubelet[2614]: I0304 01:01:16.855388 2614 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:01:16.858757 kubelet[2614]: I0304 01:01:16.858704 2614 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:01:16.866570 kubelet[2614]: I0304 01:01:16.865590 2614 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 01:01:16.866570 kubelet[2614]: I0304 01:01:16.865639 2614 server.go:1289] "Started kubelet"
Mar 4 01:01:16.868167 kubelet[2614]: I0304 01:01:16.868047 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 01:01:16.875278 kubelet[2614]: I0304 01:01:16.875021 2614 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:01:16.877183 kubelet[2614]: I0304 01:01:16.877026 2614 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 01:01:16.879169 kubelet[2614]: E0304 01:01:16.879146 2614 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:01:16.879832 kubelet[2614]: I0304 01:01:16.879662 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:01:16.881335 kubelet[2614]: I0304 01:01:16.881319 2614 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:01:16.882769 kubelet[2614]: I0304 01:01:16.882298 2614 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 01:01:16.882769 kubelet[2614]: I0304 01:01:16.882483 2614 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 01:01:16.882769 kubelet[2614]: I0304 01:01:16.882603 2614 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 01:01:16.884583 kubelet[2614]: I0304 01:01:16.883658 2614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:01:16.886791 kubelet[2614]: I0304 01:01:16.886721 2614 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:01:16.887562 kubelet[2614]: I0304 01:01:16.887361 2614 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:01:16.893094 kubelet[2614]: I0304 01:01:16.893068 2614 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:01:16.923454 kubelet[2614]: I0304 01:01:16.923298 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:01:16.926488 kubelet[2614]: I0304 01:01:16.925792 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:01:16.926488 kubelet[2614]: I0304 01:01:16.925849 2614 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 01:01:16.926488 kubelet[2614]: I0304 01:01:16.925875 2614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:01:16.926488 kubelet[2614]: I0304 01:01:16.925883 2614 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 4 01:01:16.926488 kubelet[2614]: E0304 01:01:16.925935 2614 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:01:16.983945 kubelet[2614]: I0304 01:01:16.983782 2614 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 01:01:16.983945 kubelet[2614]: I0304 01:01:16.983802 2614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 01:01:16.983945 kubelet[2614]: I0304 01:01:16.983822 2614 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:01:16.984159 kubelet[2614]: I0304 01:01:16.984121 2614 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 4 01:01:16.984159 kubelet[2614]: I0304 01:01:16.984136 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 4 01:01:16.984159 kubelet[2614]: I0304 01:01:16.984157 2614 policy_none.go:49] "None policy: Start"
Mar 4 01:01:16.984368 kubelet[2614]: I0304 01:01:16.984169 2614 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 4 01:01:16.984368 kubelet[2614]: I0304 01:01:16.984183 2614 state_mem.go:35] "Initializing new in-memory state store"
Mar 4 01:01:16.984368 kubelet[2614]: I0304 01:01:16.984344 2614 state_mem.go:75] "Updated machine memory state"
Mar 4 01:01:16.996724 kubelet[2614]: E0304 01:01:16.996639 2614 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 01:01:16.996909 kubelet[2614]: I0304 01:01:16.996867 2614 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 4 01:01:16.996947 kubelet[2614]: I0304 01:01:16.996909 2614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 01:01:16.997716 kubelet[2614]: I0304 01:01:16.997633 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 4 01:01:17.001433 kubelet[2614]: E0304 01:01:16.999548 2614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 01:01:17.029449 kubelet[2614]: I0304 01:01:17.028162 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:17.029449 kubelet[2614]: I0304 01:01:17.028623 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:17.029449 kubelet[2614]: I0304 01:01:17.028676 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:17.110342 kubelet[2614]: I0304 01:01:17.110089 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:17.124860 kubelet[2614]: I0304 01:01:17.124755 2614 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 4 01:01:17.125072 kubelet[2614]: I0304 01:01:17.124890 2614 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 4 01:01:17.184471 kubelet[2614]: I0304 01:01:17.184272 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b9f7e59646c72dc3156f8dc0cfb582f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b9f7e59646c72dc3156f8dc0cfb582f\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:17.184471 kubelet[2614]: I0304 01:01:17.184340 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:17.184471 kubelet[2614]: I0304 01:01:17.184364 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:17.184471 kubelet[2614]: I0304 01:01:17.184381 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:17.184471 kubelet[2614]: I0304 01:01:17.184437 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b9f7e59646c72dc3156f8dc0cfb582f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b9f7e59646c72dc3156f8dc0cfb582f\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:17.184763 kubelet[2614]: I0304 01:01:17.184455 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b9f7e59646c72dc3156f8dc0cfb582f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b9f7e59646c72dc3156f8dc0cfb582f\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:17.184763 kubelet[2614]: I0304 01:01:17.184474 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:17.184763 kubelet[2614]: I0304 01:01:17.184577 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:17.184763 kubelet[2614]: I0304 01:01:17.184596 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:17.347541 kubelet[2614]: E0304 01:01:17.347095 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:17.347541 kubelet[2614]: E0304 01:01:17.347121 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:17.347541 kubelet[2614]: E0304 01:01:17.347599 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:17.857331 kubelet[2614]: I0304 01:01:17.854804 2614 apiserver.go:52] "Watching apiserver"
Mar 4 01:01:17.882825 kubelet[2614]: I0304 01:01:17.882677 2614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 4 01:01:17.955691 kubelet[2614]: E0304 01:01:17.954975 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:17.955691 kubelet[2614]: I0304 01:01:17.954952 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:17.955691 kubelet[2614]: I0304 01:01:17.955536 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:17.964983 kubelet[2614]: E0304 01:01:17.964666 2614 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:17.965590 kubelet[2614]: E0304 01:01:17.965105 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:17.969849 kubelet[2614]: E0304 01:01:17.969645 2614 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:17.969849 kubelet[2614]: E0304 01:01:17.969784 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:18.029019 kubelet[2614]: I0304 01:01:18.028861 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.028797658 podStartE2EDuration="1.028797658s" podCreationTimestamp="2026-03-04 01:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:18.014341238
+0000 UTC m=+1.295629185" watchObservedRunningTime="2026-03-04 01:01:18.028797658 +0000 UTC m=+1.310085584" Mar 4 01:01:18.042948 kubelet[2614]: I0304 01:01:18.042770 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.042744286 podStartE2EDuration="1.042744286s" podCreationTimestamp="2026-03-04 01:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:18.029299625 +0000 UTC m=+1.310587562" watchObservedRunningTime="2026-03-04 01:01:18.042744286 +0000 UTC m=+1.324032233" Mar 4 01:01:18.058886 kubelet[2614]: I0304 01:01:18.058819 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.058746752 podStartE2EDuration="1.058746752s" podCreationTimestamp="2026-03-04 01:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:18.043182225 +0000 UTC m=+1.324470151" watchObservedRunningTime="2026-03-04 01:01:18.058746752 +0000 UTC m=+1.340034680" Mar 4 01:01:19.071982 kubelet[2614]: E0304 01:01:19.071616 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:19.075180 kubelet[2614]: E0304 01:01:19.073173 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:20.151598 kubelet[2614]: E0304 01:01:20.151097 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:20.580688 kubelet[2614]: E0304 01:01:20.580573 
2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:22.332508 kubelet[2614]: E0304 01:01:22.326100 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.356s" Mar 4 01:01:22.641873 kubelet[2614]: E0304 01:01:22.627833 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:22.658996 kubelet[2614]: E0304 01:01:22.658792 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:23.131426 kubelet[2614]: I0304 01:01:23.130895 2614 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:01:24.529406 containerd[1466]: time="2026-03-04T01:01:24.522448387Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 01:01:24.555833 kubelet[2614]: I0304 01:01:24.555728 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:01:24.947311 kubelet[2614]: E0304 01:01:24.946965 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.984s" Mar 4 01:01:24.958854 kubelet[2614]: E0304 01:01:24.958585 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:25.510380 systemd[1]: Created slice kubepods-besteffort-pod389b59e8_9ded_40c2_b1be_45a9f8143787.slice - libcontainer container kubepods-besteffort-pod389b59e8_9ded_40c2_b1be_45a9f8143787.slice. 
Mar 4 01:01:25.634902 systemd[1]: Created slice kubepods-besteffort-podda261640_2725_482c_a992_2a4b6c3a7d02.slice - libcontainer container kubepods-besteffort-podda261640_2725_482c_a992_2a4b6c3a7d02.slice. Mar 4 01:01:25.644627 kubelet[2614]: I0304 01:01:25.644503 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/389b59e8-9ded-40c2-b1be-45a9f8143787-kube-proxy\") pod \"kube-proxy-lx8jh\" (UID: \"389b59e8-9ded-40c2-b1be-45a9f8143787\") " pod="kube-system/kube-proxy-lx8jh" Mar 4 01:01:25.645058 kubelet[2614]: I0304 01:01:25.644689 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/389b59e8-9ded-40c2-b1be-45a9f8143787-xtables-lock\") pod \"kube-proxy-lx8jh\" (UID: \"389b59e8-9ded-40c2-b1be-45a9f8143787\") " pod="kube-system/kube-proxy-lx8jh" Mar 4 01:01:25.645058 kubelet[2614]: I0304 01:01:25.644801 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsdrx\" (UniqueName: \"kubernetes.io/projected/389b59e8-9ded-40c2-b1be-45a9f8143787-kube-api-access-qsdrx\") pod \"kube-proxy-lx8jh\" (UID: \"389b59e8-9ded-40c2-b1be-45a9f8143787\") " pod="kube-system/kube-proxy-lx8jh" Mar 4 01:01:25.645058 kubelet[2614]: I0304 01:01:25.644981 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/389b59e8-9ded-40c2-b1be-45a9f8143787-lib-modules\") pod \"kube-proxy-lx8jh\" (UID: \"389b59e8-9ded-40c2-b1be-45a9f8143787\") " pod="kube-system/kube-proxy-lx8jh" Mar 4 01:01:25.746301 kubelet[2614]: I0304 01:01:25.746167 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/da261640-2725-482c-a992-2a4b6c3a7d02-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-njpbp\" (UID: \"da261640-2725-482c-a992-2a4b6c3a7d02\") " pod="tigera-operator/tigera-operator-6bf85f8dd-njpbp" Mar 4 01:01:25.746826 kubelet[2614]: I0304 01:01:25.746679 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws76v\" (UniqueName: \"kubernetes.io/projected/da261640-2725-482c-a992-2a4b6c3a7d02-kube-api-access-ws76v\") pod \"tigera-operator-6bf85f8dd-njpbp\" (UID: \"da261640-2725-482c-a992-2a4b6c3a7d02\") " pod="tigera-operator/tigera-operator-6bf85f8dd-njpbp" Mar 4 01:01:25.757184 kubelet[2614]: E0304 01:01:25.756855 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:25.821968 kubelet[2614]: E0304 01:01:25.821725 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:25.822877 containerd[1466]: time="2026-03-04T01:01:25.822733730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lx8jh,Uid:389b59e8-9ded-40c2-b1be-45a9f8143787,Namespace:kube-system,Attempt:0,}" Mar 4 01:01:25.942377 containerd[1466]: time="2026-03-04T01:01:25.941917380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-njpbp,Uid:da261640-2725-482c-a992-2a4b6c3a7d02,Namespace:tigera-operator,Attempt:0,}" Mar 4 01:01:25.967956 containerd[1466]: time="2026-03-04T01:01:25.967196662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:25.967956 containerd[1466]: time="2026-03-04T01:01:25.967537182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:25.967956 containerd[1466]: time="2026-03-04T01:01:25.967556457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:25.967956 containerd[1466]: time="2026-03-04T01:01:25.967775783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:26.005073 containerd[1466]: time="2026-03-04T01:01:26.004494137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:26.005073 containerd[1466]: time="2026-03-04T01:01:26.004582621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:26.005073 containerd[1466]: time="2026-03-04T01:01:26.004647902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:26.005073 containerd[1466]: time="2026-03-04T01:01:26.004832945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:26.084640 systemd[1]: Started cri-containerd-07ffdd3520eac305d264c51e1906af1abc3583b2817ed62915e254ae1f123d15.scope - libcontainer container 07ffdd3520eac305d264c51e1906af1abc3583b2817ed62915e254ae1f123d15. Mar 4 01:01:26.106670 systemd[1]: Started cri-containerd-7161fe1124366e8dafd03bed087a1179b22602159919377150669bfb300a6688.scope - libcontainer container 7161fe1124366e8dafd03bed087a1179b22602159919377150669bfb300a6688. 
Mar 4 01:01:26.409801 containerd[1466]: time="2026-03-04T01:01:26.407709034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lx8jh,Uid:389b59e8-9ded-40c2-b1be-45a9f8143787,Namespace:kube-system,Attempt:0,} returns sandbox id \"7161fe1124366e8dafd03bed087a1179b22602159919377150669bfb300a6688\"" Mar 4 01:01:26.420293 kubelet[2614]: E0304 01:01:26.419871 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:26.529127 containerd[1466]: time="2026-03-04T01:01:26.529008036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-njpbp,Uid:da261640-2725-482c-a992-2a4b6c3a7d02,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"07ffdd3520eac305d264c51e1906af1abc3583b2817ed62915e254ae1f123d15\"" Mar 4 01:01:26.869938 containerd[1466]: time="2026-03-04T01:01:26.869597395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 01:01:27.093852 containerd[1466]: time="2026-03-04T01:01:27.093087047Z" level=info msg="CreateContainer within sandbox \"7161fe1124366e8dafd03bed087a1179b22602159919377150669bfb300a6688\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:01:27.150689 containerd[1466]: time="2026-03-04T01:01:27.150511289Z" level=info msg="CreateContainer within sandbox \"7161fe1124366e8dafd03bed087a1179b22602159919377150669bfb300a6688\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d374dd93071b0407b385f8c09d8db87952de9b3d71ed3502f4f445020fd6abaf\"" Mar 4 01:01:27.155139 containerd[1466]: time="2026-03-04T01:01:27.154661887Z" level=info msg="StartContainer for \"d374dd93071b0407b385f8c09d8db87952de9b3d71ed3502f4f445020fd6abaf\"" Mar 4 01:01:27.280580 systemd[1]: Started cri-containerd-d374dd93071b0407b385f8c09d8db87952de9b3d71ed3502f4f445020fd6abaf.scope - libcontainer container 
d374dd93071b0407b385f8c09d8db87952de9b3d71ed3502f4f445020fd6abaf. Mar 4 01:01:27.555921 containerd[1466]: time="2026-03-04T01:01:27.555443168Z" level=info msg="StartContainer for \"d374dd93071b0407b385f8c09d8db87952de9b3d71ed3502f4f445020fd6abaf\" returns successfully" Mar 4 01:01:28.130336 kubelet[2614]: E0304 01:01:28.130168 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:28.142830 kubelet[2614]: I0304 01:01:28.142555 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lx8jh" podStartSLOduration=3.142538411 podStartE2EDuration="3.142538411s" podCreationTimestamp="2026-03-04 01:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:28.141564757 +0000 UTC m=+11.422852684" watchObservedRunningTime="2026-03-04 01:01:28.142538411 +0000 UTC m=+11.423826338" Mar 4 01:01:29.133656 kubelet[2614]: E0304 01:01:29.133174 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:29.632335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406107019.mount: Deactivated successfully. 
Mar 4 01:01:31.045999 containerd[1466]: time="2026-03-04T01:01:31.045593607Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:31.049951 containerd[1466]: time="2026-03-04T01:01:31.047448855Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 4 01:01:31.051743 containerd[1466]: time="2026-03-04T01:01:31.051613831Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:31.069071 containerd[1466]: time="2026-03-04T01:01:31.068796190Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:31.080172 containerd[1466]: time="2026-03-04T01:01:31.079902474Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.210103755s" Mar 4 01:01:31.080172 containerd[1466]: time="2026-03-04T01:01:31.080163898Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 4 01:01:31.131847 containerd[1466]: time="2026-03-04T01:01:31.131674313Z" level=info msg="CreateContainer within sandbox \"07ffdd3520eac305d264c51e1906af1abc3583b2817ed62915e254ae1f123d15\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 01:01:31.153810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462288998.mount: Deactivated successfully. 
Mar 4 01:01:31.157828 containerd[1466]: time="2026-03-04T01:01:31.157652254Z" level=info msg="CreateContainer within sandbox \"07ffdd3520eac305d264c51e1906af1abc3583b2817ed62915e254ae1f123d15\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8d5e01017f8207176c728e2f5938f9cc4b293e334076b03b9f10de341326d1fd\"" Mar 4 01:01:31.159340 containerd[1466]: time="2026-03-04T01:01:31.159186652Z" level=info msg="StartContainer for \"8d5e01017f8207176c728e2f5938f9cc4b293e334076b03b9f10de341326d1fd\"" Mar 4 01:01:31.232551 systemd[1]: Started cri-containerd-8d5e01017f8207176c728e2f5938f9cc4b293e334076b03b9f10de341326d1fd.scope - libcontainer container 8d5e01017f8207176c728e2f5938f9cc4b293e334076b03b9f10de341326d1fd. Mar 4 01:01:31.284377 containerd[1466]: time="2026-03-04T01:01:31.284196707Z" level=info msg="StartContainer for \"8d5e01017f8207176c728e2f5938f9cc4b293e334076b03b9f10de341326d1fd\" returns successfully" Mar 4 01:01:39.430142 sudo[1658]: pam_unix(sudo:session): session closed for user root Mar 4 01:01:39.433784 sshd[1655]: pam_unix(sshd:session): session closed for user core Mar 4 01:01:39.453785 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:53716.service: Deactivated successfully. Mar 4 01:01:39.463870 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:01:39.466665 systemd[1]: session-9.scope: Consumed 20.066s CPU time, 167.9M memory peak, 0B memory swap peak. Mar 4 01:01:39.470492 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:01:39.474026 systemd-logind[1456]: Removed session 9. 
Mar 4 01:01:40.485774 kubelet[2614]: I0304 01:01:40.485115 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-njpbp" podStartSLOduration=11.136160856 podStartE2EDuration="15.485086895s" podCreationTimestamp="2026-03-04 01:01:25 +0000 UTC" firstStartedPulling="2026-03-04 01:01:26.74453002 +0000 UTC m=+10.025817967" lastFinishedPulling="2026-03-04 01:01:31.093456079 +0000 UTC m=+14.374744006" observedRunningTime="2026-03-04 01:01:32.546484242 +0000 UTC m=+15.827772198" watchObservedRunningTime="2026-03-04 01:01:40.485086895 +0000 UTC m=+23.766374823" Mar 4 01:01:40.533474 systemd[1]: Created slice kubepods-besteffort-pod317bcadb_c210_4723_94fe_53dc80315789.slice - libcontainer container kubepods-besteffort-pod317bcadb_c210_4723_94fe_53dc80315789.slice. Mar 4 01:01:40.645617 kubelet[2614]: I0304 01:01:40.645533 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/317bcadb-c210-4723-94fe-53dc80315789-tigera-ca-bundle\") pod \"calico-typha-6fd46b9b89-5dz8q\" (UID: \"317bcadb-c210-4723-94fe-53dc80315789\") " pod="calico-system/calico-typha-6fd46b9b89-5dz8q" Mar 4 01:01:40.646402 kubelet[2614]: I0304 01:01:40.646150 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/317bcadb-c210-4723-94fe-53dc80315789-typha-certs\") pod \"calico-typha-6fd46b9b89-5dz8q\" (UID: \"317bcadb-c210-4723-94fe-53dc80315789\") " pod="calico-system/calico-typha-6fd46b9b89-5dz8q" Mar 4 01:01:40.646402 kubelet[2614]: I0304 01:01:40.646201 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczcx\" (UniqueName: \"kubernetes.io/projected/317bcadb-c210-4723-94fe-53dc80315789-kube-api-access-pczcx\") pod \"calico-typha-6fd46b9b89-5dz8q\" (UID: 
\"317bcadb-c210-4723-94fe-53dc80315789\") " pod="calico-system/calico-typha-6fd46b9b89-5dz8q" Mar 4 01:01:40.735180 systemd[1]: Created slice kubepods-besteffort-poddfc414a7_47ff_4938_8240_5a85755f48e7.slice - libcontainer container kubepods-besteffort-poddfc414a7_47ff_4938_8240_5a85755f48e7.slice. Mar 4 01:01:40.829872 kubelet[2614]: E0304 01:01:40.829508 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:40.843889 kubelet[2614]: E0304 01:01:40.843840 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:40.845719 containerd[1466]: time="2026-03-04T01:01:40.845406023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd46b9b89-5dz8q,Uid:317bcadb-c210-4723-94fe-53dc80315789,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:40.847009 kubelet[2614]: I0304 01:01:40.846968 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-flexvol-driver-host\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847162 kubelet[2614]: I0304 01:01:40.847016 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-var-run-calico\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847162 kubelet[2614]: I0304 
01:01:40.847042 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mx94\" (UniqueName: \"kubernetes.io/projected/dfc414a7-47ff-4938-8240-5a85755f48e7-kube-api-access-4mx94\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847162 kubelet[2614]: I0304 01:01:40.847085 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-cni-net-dir\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847162 kubelet[2614]: I0304 01:01:40.847108 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-bpffs\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847162 kubelet[2614]: I0304 01:01:40.847131 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc414a7-47ff-4938-8240-5a85755f48e7-tigera-ca-bundle\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847592 kubelet[2614]: I0304 01:01:40.847157 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-lib-modules\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847592 kubelet[2614]: I0304 01:01:40.847181 2614 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dfc414a7-47ff-4938-8240-5a85755f48e7-node-certs\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847592 kubelet[2614]: I0304 01:01:40.847202 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-nodeproc\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847592 kubelet[2614]: I0304 01:01:40.847396 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-var-lib-calico\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847592 kubelet[2614]: I0304 01:01:40.847430 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-sys-fs\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847758 kubelet[2614]: I0304 01:01:40.847568 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-cni-log-dir\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847758 kubelet[2614]: I0304 01:01:40.847596 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-cni-bin-dir\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847758 kubelet[2614]: I0304 01:01:40.847618 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-policysync\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.847758 kubelet[2614]: I0304 01:01:40.847717 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfc414a7-47ff-4938-8240-5a85755f48e7-xtables-lock\") pod \"calico-node-ss68l\" (UID: \"dfc414a7-47ff-4938-8240-5a85755f48e7\") " pod="calico-system/calico-node-ss68l" Mar 4 01:01:40.953119 kubelet[2614]: I0304 01:01:40.951024 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e5349a34-0a7e-48e8-966b-ab286041115e-socket-dir\") pod \"csi-node-driver-2lkt8\" (UID: \"e5349a34-0a7e-48e8-966b-ab286041115e\") " pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:40.955961 kubelet[2614]: I0304 01:01:40.954177 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e5349a34-0a7e-48e8-966b-ab286041115e-registration-dir\") pod \"csi-node-driver-2lkt8\" (UID: \"e5349a34-0a7e-48e8-966b-ab286041115e\") " pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:40.955961 kubelet[2614]: I0304 01:01:40.954381 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: 
\"kubernetes.io/projected/e5349a34-0a7e-48e8-966b-ab286041115e-kube-api-access-lkfqb\") pod \"csi-node-driver-2lkt8\" (UID: \"e5349a34-0a7e-48e8-966b-ab286041115e\") " pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:40.955961 kubelet[2614]: I0304 01:01:40.954478 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5349a34-0a7e-48e8-966b-ab286041115e-kubelet-dir\") pod \"csi-node-driver-2lkt8\" (UID: \"e5349a34-0a7e-48e8-966b-ab286041115e\") " pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:40.955961 kubelet[2614]: I0304 01:01:40.954528 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e5349a34-0a7e-48e8-966b-ab286041115e-varrun\") pod \"csi-node-driver-2lkt8\" (UID: \"e5349a34-0a7e-48e8-966b-ab286041115e\") " pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:40.972091 kubelet[2614]: E0304 01:01:40.971810 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:40.973171 kubelet[2614]: W0304 01:01:40.972626 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:40.973830 kubelet[2614]: E0304 01:01:40.973720 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:40.977394 containerd[1466]: time="2026-03-04T01:01:40.975934936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:40.977394 containerd[1466]: time="2026-03-04T01:01:40.976071910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:40.977394 containerd[1466]: time="2026-03-04T01:01:40.976089452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:40.978542 containerd[1466]: time="2026-03-04T01:01:40.977869275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:41.009576 kubelet[2614]: E0304 01:01:41.008475 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.009576 kubelet[2614]: W0304 01:01:41.008596 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.009576 kubelet[2614]: E0304 01:01:41.008624 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.009576 kubelet[2614]: E0304 01:01:41.009087 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.009576 kubelet[2614]: W0304 01:01:41.009107 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.009576 kubelet[2614]: E0304 01:01:41.009131 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.044750 containerd[1466]: time="2026-03-04T01:01:41.044061025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ss68l,Uid:dfc414a7-47ff-4938-8240-5a85755f48e7,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:41.053721 systemd[1]: Started cri-containerd-5812905abfebe855941e23754b97712a13c00aaabcaf381c24528df021b65767.scope - libcontainer container 5812905abfebe855941e23754b97712a13c00aaabcaf381c24528df021b65767. Mar 4 01:01:41.056832 kubelet[2614]: E0304 01:01:41.056618 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.056832 kubelet[2614]: W0304 01:01:41.056662 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.056832 kubelet[2614]: E0304 01:01:41.056682 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.058564 kubelet[2614]: E0304 01:01:41.057494 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.058564 kubelet[2614]: W0304 01:01:41.057508 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.058564 kubelet[2614]: E0304 01:01:41.057520 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.058564 kubelet[2614]: E0304 01:01:41.058042 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.058564 kubelet[2614]: W0304 01:01:41.058054 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.058564 kubelet[2614]: E0304 01:01:41.058065 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.059206 kubelet[2614]: E0304 01:01:41.058662 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.059206 kubelet[2614]: W0304 01:01:41.058673 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.059206 kubelet[2614]: E0304 01:01:41.058682 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.059561 kubelet[2614]: E0304 01:01:41.059393 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.059561 kubelet[2614]: W0304 01:01:41.059466 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.059561 kubelet[2614]: E0304 01:01:41.059477 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.060444 kubelet[2614]: E0304 01:01:41.060117 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.060444 kubelet[2614]: W0304 01:01:41.060131 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.060444 kubelet[2614]: E0304 01:01:41.060141 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.062004 kubelet[2614]: E0304 01:01:41.061051 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.062004 kubelet[2614]: W0304 01:01:41.061066 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.062004 kubelet[2614]: E0304 01:01:41.061077 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.063388 kubelet[2614]: E0304 01:01:41.063145 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.063388 kubelet[2614]: W0304 01:01:41.063191 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.063388 kubelet[2614]: E0304 01:01:41.063208 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.064065 kubelet[2614]: E0304 01:01:41.064007 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.064182 kubelet[2614]: W0304 01:01:41.064138 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.064182 kubelet[2614]: E0304 01:01:41.064171 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.065370 kubelet[2614]: E0304 01:01:41.065170 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.065370 kubelet[2614]: W0304 01:01:41.065321 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.065370 kubelet[2614]: E0304 01:01:41.065336 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.066009 kubelet[2614]: E0304 01:01:41.065932 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.066009 kubelet[2614]: W0304 01:01:41.065944 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.066009 kubelet[2614]: E0304 01:01:41.065954 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.066844 kubelet[2614]: E0304 01:01:41.066764 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.066844 kubelet[2614]: W0304 01:01:41.066782 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.066844 kubelet[2614]: E0304 01:01:41.066796 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.067551 kubelet[2614]: E0304 01:01:41.067464 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.067638 kubelet[2614]: W0304 01:01:41.067597 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.067638 kubelet[2614]: E0304 01:01:41.067610 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.068291 kubelet[2614]: E0304 01:01:41.068143 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.068291 kubelet[2614]: W0304 01:01:41.068175 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.068390 kubelet[2614]: E0304 01:01:41.068205 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.068898 kubelet[2614]: E0304 01:01:41.068859 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.068898 kubelet[2614]: W0304 01:01:41.068894 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.069053 kubelet[2614]: E0304 01:01:41.068904 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.069577 kubelet[2614]: E0304 01:01:41.069532 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.069577 kubelet[2614]: W0304 01:01:41.069575 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.070063 kubelet[2614]: E0304 01:01:41.069587 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.070063 kubelet[2614]: E0304 01:01:41.070031 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.070063 kubelet[2614]: W0304 01:01:41.070041 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.070063 kubelet[2614]: E0304 01:01:41.070050 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.070649 kubelet[2614]: E0304 01:01:41.070640 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.070699 kubelet[2614]: W0304 01:01:41.070650 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.070699 kubelet[2614]: E0304 01:01:41.070662 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.071414 kubelet[2614]: E0304 01:01:41.071190 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.071414 kubelet[2614]: W0304 01:01:41.071337 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.071414 kubelet[2614]: E0304 01:01:41.071349 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.071873 kubelet[2614]: E0304 01:01:41.071688 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.071873 kubelet[2614]: W0304 01:01:41.071698 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.071873 kubelet[2614]: E0304 01:01:41.071707 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.072014 kubelet[2614]: E0304 01:01:41.071985 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.072014 kubelet[2614]: W0304 01:01:41.071994 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.072014 kubelet[2614]: E0304 01:01:41.072003 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.072735 kubelet[2614]: E0304 01:01:41.072630 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.072735 kubelet[2614]: W0304 01:01:41.072641 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.072735 kubelet[2614]: E0304 01:01:41.072650 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.073147 kubelet[2614]: E0304 01:01:41.073115 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.073147 kubelet[2614]: W0304 01:01:41.073130 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.073147 kubelet[2614]: E0304 01:01:41.073140 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.074415 kubelet[2614]: E0304 01:01:41.073782 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.074415 kubelet[2614]: W0304 01:01:41.073904 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.074415 kubelet[2614]: E0304 01:01:41.073915 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:41.076016 kubelet[2614]: E0304 01:01:41.075738 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.076016 kubelet[2614]: W0304 01:01:41.075749 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.076016 kubelet[2614]: E0304 01:01:41.075759 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.113571 kubelet[2614]: E0304 01:01:41.113534 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:41.113811 kubelet[2614]: W0304 01:01:41.113719 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:41.113811 kubelet[2614]: E0304 01:01:41.113756 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:41.148755 containerd[1466]: time="2026-03-04T01:01:41.148457516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:41.148755 containerd[1466]: time="2026-03-04T01:01:41.148557782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:41.148755 containerd[1466]: time="2026-03-04T01:01:41.148591605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:41.151425 containerd[1466]: time="2026-03-04T01:01:41.149468834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:41.173985 containerd[1466]: time="2026-03-04T01:01:41.173842223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd46b9b89-5dz8q,Uid:317bcadb-c210-4723-94fe-53dc80315789,Namespace:calico-system,Attempt:0,} returns sandbox id \"5812905abfebe855941e23754b97712a13c00aaabcaf381c24528df021b65767\"" Mar 4 01:01:41.179809 kubelet[2614]: E0304 01:01:41.179679 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:41.186204 containerd[1466]: time="2026-03-04T01:01:41.185844177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 01:01:41.186650 systemd[1]: Started cri-containerd-021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf.scope - libcontainer container 021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf. Mar 4 01:01:41.259812 containerd[1466]: time="2026-03-04T01:01:41.259606218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ss68l,Uid:dfc414a7-47ff-4938-8240-5a85755f48e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\"" Mar 4 01:01:41.942745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277419342.mount: Deactivated successfully. 
Mar 4 01:01:42.933539 kubelet[2614]: E0304 01:01:42.933196 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:43.029740 containerd[1466]: time="2026-03-04T01:01:43.029614260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:43.031127 containerd[1466]: time="2026-03-04T01:01:43.031002706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 4 01:01:43.032626 containerd[1466]: time="2026-03-04T01:01:43.032573632Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:43.036013 containerd[1466]: time="2026-03-04T01:01:43.035863853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:43.037637 containerd[1466]: time="2026-03-04T01:01:43.037398190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.851499723s" Mar 4 01:01:43.037771 containerd[1466]: time="2026-03-04T01:01:43.037648043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 4 01:01:43.047951 containerd[1466]: time="2026-03-04T01:01:43.047751998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 01:01:43.104440 containerd[1466]: time="2026-03-04T01:01:43.104205485Z" level=info msg="CreateContainer within sandbox \"5812905abfebe855941e23754b97712a13c00aaabcaf381c24528df021b65767\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 01:01:43.136587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843841936.mount: Deactivated successfully. Mar 4 01:01:43.244149 containerd[1466]: time="2026-03-04T01:01:43.243901965Z" level=info msg="CreateContainer within sandbox \"5812905abfebe855941e23754b97712a13c00aaabcaf381c24528df021b65767\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6b5e12ce367ad9a7063183a92a43e075dc76daa4141e8b5a516bc9b4f09a0e9a\"" Mar 4 01:01:43.245131 containerd[1466]: time="2026-03-04T01:01:43.245071727Z" level=info msg="StartContainer for \"6b5e12ce367ad9a7063183a92a43e075dc76daa4141e8b5a516bc9b4f09a0e9a\"" Mar 4 01:01:43.325754 systemd[1]: Started cri-containerd-6b5e12ce367ad9a7063183a92a43e075dc76daa4141e8b5a516bc9b4f09a0e9a.scope - libcontainer container 6b5e12ce367ad9a7063183a92a43e075dc76daa4141e8b5a516bc9b4f09a0e9a. 
Mar 4 01:01:43.475109 containerd[1466]: time="2026-03-04T01:01:43.474959035Z" level=info msg="StartContainer for \"6b5e12ce367ad9a7063183a92a43e075dc76daa4141e8b5a516bc9b4f09a0e9a\" returns successfully" Mar 4 01:01:43.533180 kubelet[2614]: E0304 01:01:43.533029 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:43.607814 kubelet[2614]: E0304 01:01:43.607655 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.607814 kubelet[2614]: W0304 01:01:43.607815 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.608036 kubelet[2614]: E0304 01:01:43.607855 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.608607 kubelet[2614]: E0304 01:01:43.608562 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.608607 kubelet[2614]: W0304 01:01:43.608581 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.608607 kubelet[2614]: E0304 01:01:43.608603 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.611785 kubelet[2614]: E0304 01:01:43.611057 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.611785 kubelet[2614]: W0304 01:01:43.611415 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.611785 kubelet[2614]: E0304 01:01:43.611545 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.612171 kubelet[2614]: E0304 01:01:43.612124 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.612358 kubelet[2614]: W0304 01:01:43.612175 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.612358 kubelet[2614]: E0304 01:01:43.612195 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.615450 kubelet[2614]: E0304 01:01:43.613884 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.615450 kubelet[2614]: W0304 01:01:43.613906 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.615450 kubelet[2614]: E0304 01:01:43.613925 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.615450 kubelet[2614]: E0304 01:01:43.614902 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.615450 kubelet[2614]: W0304 01:01:43.614916 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.615450 kubelet[2614]: E0304 01:01:43.614934 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.615693 kubelet[2614]: E0304 01:01:43.615630 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.615693 kubelet[2614]: W0304 01:01:43.615648 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.615693 kubelet[2614]: E0304 01:01:43.615664 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.616803 kubelet[2614]: E0304 01:01:43.616724 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.616803 kubelet[2614]: W0304 01:01:43.616792 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.616917 kubelet[2614]: E0304 01:01:43.616813 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.620817 kubelet[2614]: E0304 01:01:43.620552 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.620817 kubelet[2614]: W0304 01:01:43.620576 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.620817 kubelet[2614]: E0304 01:01:43.620596 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.621354 kubelet[2614]: E0304 01:01:43.621166 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.621421 kubelet[2614]: W0304 01:01:43.621206 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.621421 kubelet[2614]: E0304 01:01:43.621378 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.622900 kubelet[2614]: E0304 01:01:43.622628 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.622900 kubelet[2614]: W0304 01:01:43.622681 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.622900 kubelet[2614]: E0304 01:01:43.622695 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.625507 kubelet[2614]: E0304 01:01:43.623197 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.625507 kubelet[2614]: W0304 01:01:43.623403 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.625507 kubelet[2614]: E0304 01:01:43.623421 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.625507 kubelet[2614]: E0304 01:01:43.624762 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.625507 kubelet[2614]: W0304 01:01:43.624778 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.625507 kubelet[2614]: E0304 01:01:43.624791 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.625507 kubelet[2614]: E0304 01:01:43.625184 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.625507 kubelet[2614]: W0304 01:01:43.625198 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.625507 kubelet[2614]: E0304 01:01:43.625380 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.626132 kubelet[2614]: E0304 01:01:43.626055 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.626132 kubelet[2614]: W0304 01:01:43.626120 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.626132 kubelet[2614]: E0304 01:01:43.626136 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.628341 kubelet[2614]: E0304 01:01:43.627186 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.628341 kubelet[2614]: W0304 01:01:43.627204 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.628341 kubelet[2614]: E0304 01:01:43.627345 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.628778 kubelet[2614]: E0304 01:01:43.628700 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.628778 kubelet[2614]: W0304 01:01:43.628757 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.628778 kubelet[2614]: E0304 01:01:43.628775 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.629662 kubelet[2614]: E0304 01:01:43.629612 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.629662 kubelet[2614]: W0304 01:01:43.629629 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.629662 kubelet[2614]: E0304 01:01:43.629643 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.630200 kubelet[2614]: E0304 01:01:43.630133 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.630200 kubelet[2614]: W0304 01:01:43.630195 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.630471 kubelet[2614]: E0304 01:01:43.630408 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.631452 kubelet[2614]: E0304 01:01:43.631205 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.631526 kubelet[2614]: W0304 01:01:43.631454 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.631526 kubelet[2614]: E0304 01:01:43.631474 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.632365 kubelet[2614]: E0304 01:01:43.632085 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.632365 kubelet[2614]: W0304 01:01:43.632103 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.632365 kubelet[2614]: E0304 01:01:43.632118 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.632712 kubelet[2614]: E0304 01:01:43.632652 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.632712 kubelet[2614]: W0304 01:01:43.632707 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.632811 kubelet[2614]: E0304 01:01:43.632724 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.633474 kubelet[2614]: E0304 01:01:43.633161 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.633474 kubelet[2614]: W0304 01:01:43.633375 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.633474 kubelet[2614]: E0304 01:01:43.633394 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.633795 kubelet[2614]: E0304 01:01:43.633734 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.633849 kubelet[2614]: W0304 01:01:43.633792 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.633849 kubelet[2614]: E0304 01:01:43.633809 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.634639 kubelet[2614]: E0304 01:01:43.634584 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.634694 kubelet[2614]: W0304 01:01:43.634644 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.634694 kubelet[2614]: E0304 01:01:43.634665 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.635469 kubelet[2614]: E0304 01:01:43.635424 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.635469 kubelet[2614]: W0304 01:01:43.635445 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.635469 kubelet[2614]: E0304 01:01:43.635458 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.636166 kubelet[2614]: E0304 01:01:43.636006 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.636166 kubelet[2614]: W0304 01:01:43.636073 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.636166 kubelet[2614]: E0304 01:01:43.636090 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.636915 kubelet[2614]: E0304 01:01:43.636749 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.636915 kubelet[2614]: W0304 01:01:43.636807 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.636915 kubelet[2614]: E0304 01:01:43.636825 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.637790 kubelet[2614]: E0304 01:01:43.637629 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.637790 kubelet[2614]: W0304 01:01:43.637650 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.637790 kubelet[2614]: E0304 01:01:43.637666 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.638542 kubelet[2614]: E0304 01:01:43.638492 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.638542 kubelet[2614]: W0304 01:01:43.638510 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.638542 kubelet[2614]: E0304 01:01:43.638527 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.639610 kubelet[2614]: E0304 01:01:43.639536 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.639610 kubelet[2614]: W0304 01:01:43.639555 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.639610 kubelet[2614]: E0304 01:01:43.639569 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:43.640653 kubelet[2614]: E0304 01:01:43.640120 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.640653 kubelet[2614]: W0304 01:01:43.640133 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.640653 kubelet[2614]: E0304 01:01:43.640145 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:01:43.640905 kubelet[2614]: E0304 01:01:43.640848 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:01:43.640949 kubelet[2614]: W0304 01:01:43.640906 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:01:43.640949 kubelet[2614]: E0304 01:01:43.640924 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:01:44.105599 containerd[1466]: time="2026-03-04T01:01:44.105387913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:44.106779 containerd[1466]: time="2026-03-04T01:01:44.106675214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 4 01:01:44.108202 containerd[1466]: time="2026-03-04T01:01:44.108100230Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:44.112464 containerd[1466]: time="2026-03-04T01:01:44.112366858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:44.113039 containerd[1466]: time="2026-03-04T01:01:44.112923605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.065095847s" Mar 4 01:01:44.113039 containerd[1466]: time="2026-03-04T01:01:44.112989677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 4 01:01:44.121007 containerd[1466]: time="2026-03-04T01:01:44.120904820Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 01:01:44.144610 containerd[1466]: time="2026-03-04T01:01:44.144526937Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270\"" Mar 4 01:01:44.147074 containerd[1466]: time="2026-03-04T01:01:44.145483419Z" level=info msg="StartContainer for \"df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270\"" Mar 4 01:01:44.219776 systemd[1]: Started cri-containerd-df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270.scope - libcontainer container df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270. Mar 4 01:01:44.276668 containerd[1466]: time="2026-03-04T01:01:44.276511676Z" level=info msg="StartContainer for \"df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270\" returns successfully" Mar 4 01:01:44.307621 systemd[1]: cri-containerd-df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270.scope: Deactivated successfully. 
Mar 4 01:01:44.485935 containerd[1466]: time="2026-03-04T01:01:44.484151579Z" level=info msg="shim disconnected" id=df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270 namespace=k8s.io Mar 4 01:01:44.485935 containerd[1466]: time="2026-03-04T01:01:44.484443300Z" level=warning msg="cleaning up after shim disconnected" id=df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270 namespace=k8s.io Mar 4 01:01:44.485935 containerd[1466]: time="2026-03-04T01:01:44.484463518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:01:44.539185 kubelet[2614]: I0304 01:01:44.539066 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:01:44.540601 kubelet[2614]: E0304 01:01:44.540138 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:44.542172 containerd[1466]: time="2026-03-04T01:01:44.542131845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 01:01:44.561955 kubelet[2614]: I0304 01:01:44.560708 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fd46b9b89-5dz8q" podStartSLOduration=2.69810459 podStartE2EDuration="4.560683712s" podCreationTimestamp="2026-03-04 01:01:40 +0000 UTC" firstStartedPulling="2026-03-04 01:01:41.184859228 +0000 UTC m=+24.466147155" lastFinishedPulling="2026-03-04 01:01:43.04743835 +0000 UTC m=+26.328726277" observedRunningTime="2026-03-04 01:01:43.556103524 +0000 UTC m=+26.837391451" watchObservedRunningTime="2026-03-04 01:01:44.560683712 +0000 UTC m=+27.841971659" Mar 4 01:01:44.927544 kubelet[2614]: E0304 01:01:44.927358 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:45.081009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df8ba7f68f150072dc0e81e2ceb1d75ffe087d927eace6b0488c06b039417270-rootfs.mount: Deactivated successfully. Mar 4 01:01:46.929329 kubelet[2614]: E0304 01:01:46.929122 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:48.927743 kubelet[2614]: E0304 01:01:48.926953 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:49.102623 kubelet[2614]: I0304 01:01:49.102540 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:01:49.103352 kubelet[2614]: E0304 01:01:49.103040 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:49.555748 kubelet[2614]: E0304 01:01:49.555431 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:50.475951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227320019.mount: Deactivated successfully. 
Mar 4 01:01:50.748447 containerd[1466]: time="2026-03-04T01:01:50.748156864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:50.749516 containerd[1466]: time="2026-03-04T01:01:50.749463024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 4 01:01:50.751349 containerd[1466]: time="2026-03-04T01:01:50.751141042Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:50.764666 containerd[1466]: time="2026-03-04T01:01:50.764516454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:50.766335 containerd[1466]: time="2026-03-04T01:01:50.766123567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.223943593s" Mar 4 01:01:50.766390 containerd[1466]: time="2026-03-04T01:01:50.766206610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 4 01:01:50.775438 containerd[1466]: time="2026-03-04T01:01:50.775371292Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 01:01:50.847079 containerd[1466]: time="2026-03-04T01:01:50.846901113Z" level=info msg="CreateContainer 
within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f\"" Mar 4 01:01:50.848618 containerd[1466]: time="2026-03-04T01:01:50.848490954Z" level=info msg="StartContainer for \"4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f\"" Mar 4 01:01:50.927068 kubelet[2614]: E0304 01:01:50.926828 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:50.998607 systemd[1]: Started cri-containerd-4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f.scope - libcontainer container 4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f. Mar 4 01:01:51.132390 containerd[1466]: time="2026-03-04T01:01:51.129372756Z" level=info msg="StartContainer for \"4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f\" returns successfully" Mar 4 01:01:51.152935 systemd[1]: cri-containerd-4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f.scope: Deactivated successfully. 
Mar 4 01:01:51.228364 containerd[1466]: time="2026-03-04T01:01:51.228114442Z" level=info msg="shim disconnected" id=4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f namespace=k8s.io Mar 4 01:01:51.228364 containerd[1466]: time="2026-03-04T01:01:51.228192648Z" level=warning msg="cleaning up after shim disconnected" id=4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f namespace=k8s.io Mar 4 01:01:51.228364 containerd[1466]: time="2026-03-04T01:01:51.228204069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:01:51.477819 systemd[1]: run-containerd-runc-k8s.io-4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f-runc.PRD6i9.mount: Deactivated successfully. Mar 4 01:01:51.477998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4664d7887cf57f6f2e772649fbb149f758a2e6ab38a323052a819ffe462cc80f-rootfs.mount: Deactivated successfully. Mar 4 01:01:51.566335 containerd[1466]: time="2026-03-04T01:01:51.565986956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 01:01:52.935281 kubelet[2614]: E0304 01:01:52.935105 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:54.369137 containerd[1466]: time="2026-03-04T01:01:54.368957548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:54.370717 containerd[1466]: time="2026-03-04T01:01:54.370591383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 4 01:01:54.371976 containerd[1466]: time="2026-03-04T01:01:54.371843481Z" level=info msg="ImageCreate event 
name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:54.374743 containerd[1466]: time="2026-03-04T01:01:54.374650448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:54.376125 containerd[1466]: time="2026-03-04T01:01:54.376062718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.81002052s" Mar 4 01:01:54.376390 containerd[1466]: time="2026-03-04T01:01:54.376130584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 4 01:01:54.384579 containerd[1466]: time="2026-03-04T01:01:54.384396799Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 01:01:54.429617 containerd[1466]: time="2026-03-04T01:01:54.429494927Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b\"" Mar 4 01:01:54.430808 containerd[1466]: time="2026-03-04T01:01:54.430669555Z" level=info msg="StartContainer for \"14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b\"" Mar 4 01:01:54.490014 systemd[1]: Started 
cri-containerd-14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b.scope - libcontainer container 14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b. Mar 4 01:01:54.555195 containerd[1466]: time="2026-03-04T01:01:54.555044807Z" level=info msg="StartContainer for \"14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b\" returns successfully" Mar 4 01:01:54.930564 kubelet[2614]: E0304 01:01:54.930413 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:55.424145 systemd[1]: cri-containerd-14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b.scope: Deactivated successfully. Mar 4 01:01:55.424637 systemd[1]: cri-containerd-14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b.scope: Consumed 1.075s CPU time. Mar 4 01:01:55.468630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b-rootfs.mount: Deactivated successfully. 
Mar 4 01:01:55.497600 kubelet[2614]: I0304 01:01:55.496874 2614 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 4 01:01:55.629792 containerd[1466]: time="2026-03-04T01:01:55.629562445Z" level=info msg="shim disconnected" id=14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b namespace=k8s.io Mar 4 01:01:55.629792 containerd[1466]: time="2026-03-04T01:01:55.629674083Z" level=warning msg="cleaning up after shim disconnected" id=14397fe49f3c60036f879e0a826cf4808129d03ded8a52599cdbb2896984a47b namespace=k8s.io Mar 4 01:01:55.629792 containerd[1466]: time="2026-03-04T01:01:55.629691816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:01:55.659161 kubelet[2614]: I0304 01:01:55.654769 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-nginx-config\") pod \"whisker-5548dddc45-46nf7\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " pod="calico-system/whisker-5548dddc45-46nf7" Mar 4 01:01:55.659161 kubelet[2614]: I0304 01:01:55.654861 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-backend-key-pair\") pod \"whisker-5548dddc45-46nf7\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " pod="calico-system/whisker-5548dddc45-46nf7" Mar 4 01:01:55.659161 kubelet[2614]: I0304 01:01:55.654898 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84l8n\" (UniqueName: \"kubernetes.io/projected/47638dfc-43ad-4f79-9126-05bb92d9f07d-kube-api-access-84l8n\") pod \"whisker-5548dddc45-46nf7\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " pod="calico-system/whisker-5548dddc45-46nf7" Mar 4 01:01:55.659161 kubelet[2614]: I0304 01:01:55.654916 2614 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38597b56-fbbb-4af6-ab46-5447e9d3191f-calico-apiserver-certs\") pod \"calico-apiserver-5fd449cb54-f4dhr\" (UID: \"38597b56-fbbb-4af6-ab46-5447e9d3191f\") " pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" Mar 4 01:01:55.659161 kubelet[2614]: I0304 01:01:55.654981 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw58n\" (UniqueName: \"kubernetes.io/projected/38597b56-fbbb-4af6-ab46-5447e9d3191f-kube-api-access-tw58n\") pod \"calico-apiserver-5fd449cb54-f4dhr\" (UID: \"38597b56-fbbb-4af6-ab46-5447e9d3191f\") " pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" Mar 4 01:01:55.657458 systemd[1]: Created slice kubepods-besteffort-pod47638dfc_43ad_4f79_9126_05bb92d9f07d.slice - libcontainer container kubepods-besteffort-pod47638dfc_43ad_4f79_9126_05bb92d9f07d.slice. Mar 4 01:01:55.660084 kubelet[2614]: I0304 01:01:55.655015 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-ca-bundle\") pod \"whisker-5548dddc45-46nf7\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " pod="calico-system/whisker-5548dddc45-46nf7" Mar 4 01:01:55.675424 systemd[1]: Created slice kubepods-besteffort-pod38597b56_fbbb_4af6_ab46_5447e9d3191f.slice - libcontainer container kubepods-besteffort-pod38597b56_fbbb_4af6_ab46_5447e9d3191f.slice. Mar 4 01:01:55.700372 systemd[1]: Created slice kubepods-besteffort-podf2d78d17_4768_48c2_ae26_3e7f45451d5a.slice - libcontainer container kubepods-besteffort-podf2d78d17_4768_48c2_ae26_3e7f45451d5a.slice. 
Mar 4 01:01:55.714593 systemd[1]: Created slice kubepods-besteffort-pod3d0380e4_587c_4361_a5b6_a8c814a6baf0.slice - libcontainer container kubepods-besteffort-pod3d0380e4_587c_4361_a5b6_a8c814a6baf0.slice. Mar 4 01:01:55.724940 systemd[1]: Created slice kubepods-besteffort-pod17b10246_700e_4e39_9a06_ab5fa1ad9082.slice - libcontainer container kubepods-besteffort-pod17b10246_700e_4e39_9a06_ab5fa1ad9082.slice. Mar 4 01:01:55.738755 systemd[1]: Created slice kubepods-burstable-pod6eeda1a6_1f9b_42a9_8645_346f1f25f12e.slice - libcontainer container kubepods-burstable-pod6eeda1a6_1f9b_42a9_8645_346f1f25f12e.slice. Mar 4 01:01:55.748695 systemd[1]: Created slice kubepods-burstable-pod968ce120_7ba4_48cc_a851_6001d23f80bd.slice - libcontainer container kubepods-burstable-pod968ce120_7ba4_48cc_a851_6001d23f80bd.slice. Mar 4 01:01:55.756001 kubelet[2614]: I0304 01:01:55.755894 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzlbs\" (UniqueName: \"kubernetes.io/projected/3d0380e4-587c-4361-a5b6-a8c814a6baf0-kube-api-access-vzlbs\") pod \"goldmane-5b85766d88-p2psc\" (UID: \"3d0380e4-587c-4361-a5b6-a8c814a6baf0\") " pod="calico-system/goldmane-5b85766d88-p2psc" Mar 4 01:01:55.756001 kubelet[2614]: I0304 01:01:55.755988 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17b10246-700e-4e39-9a06-ab5fa1ad9082-tigera-ca-bundle\") pod \"calico-kube-controllers-56874799f8-qqwn4\" (UID: \"17b10246-700e-4e39-9a06-ab5fa1ad9082\") " pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" Mar 4 01:01:55.756001 kubelet[2614]: I0304 01:01:55.756007 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d0380e4-587c-4361-a5b6-a8c814a6baf0-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-p2psc\" (UID: 
\"3d0380e4-587c-4361-a5b6-a8c814a6baf0\") " pod="calico-system/goldmane-5b85766d88-p2psc" Mar 4 01:01:55.756372 kubelet[2614]: I0304 01:01:55.756056 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhch\" (UniqueName: \"kubernetes.io/projected/17b10246-700e-4e39-9a06-ab5fa1ad9082-kube-api-access-xjhch\") pod \"calico-kube-controllers-56874799f8-qqwn4\" (UID: \"17b10246-700e-4e39-9a06-ab5fa1ad9082\") " pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" Mar 4 01:01:55.756372 kubelet[2614]: I0304 01:01:55.756076 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0380e4-587c-4361-a5b6-a8c814a6baf0-config\") pod \"goldmane-5b85766d88-p2psc\" (UID: \"3d0380e4-587c-4361-a5b6-a8c814a6baf0\") " pod="calico-system/goldmane-5b85766d88-p2psc" Mar 4 01:01:55.756372 kubelet[2614]: I0304 01:01:55.756096 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2d78d17-4768-48c2-ae26-3e7f45451d5a-calico-apiserver-certs\") pod \"calico-apiserver-5fd449cb54-47wlk\" (UID: \"f2d78d17-4768-48c2-ae26-3e7f45451d5a\") " pod="calico-system/calico-apiserver-5fd449cb54-47wlk" Mar 4 01:01:55.756372 kubelet[2614]: I0304 01:01:55.756115 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcddt\" (UniqueName: \"kubernetes.io/projected/968ce120-7ba4-48cc-a851-6001d23f80bd-kube-api-access-vcddt\") pod \"coredns-674b8bbfcf-6wjjp\" (UID: \"968ce120-7ba4-48cc-a851-6001d23f80bd\") " pod="kube-system/coredns-674b8bbfcf-6wjjp" Mar 4 01:01:55.756372 kubelet[2614]: I0304 01:01:55.756161 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6eeda1a6-1f9b-42a9-8645-346f1f25f12e-config-volume\") pod \"coredns-674b8bbfcf-jqk6s\" (UID: \"6eeda1a6-1f9b-42a9-8645-346f1f25f12e\") " pod="kube-system/coredns-674b8bbfcf-jqk6s" Mar 4 01:01:55.756557 kubelet[2614]: I0304 01:01:55.756189 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844mx\" (UniqueName: \"kubernetes.io/projected/f2d78d17-4768-48c2-ae26-3e7f45451d5a-kube-api-access-844mx\") pod \"calico-apiserver-5fd449cb54-47wlk\" (UID: \"f2d78d17-4768-48c2-ae26-3e7f45451d5a\") " pod="calico-system/calico-apiserver-5fd449cb54-47wlk" Mar 4 01:01:55.756557 kubelet[2614]: I0304 01:01:55.756205 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/968ce120-7ba4-48cc-a851-6001d23f80bd-config-volume\") pod \"coredns-674b8bbfcf-6wjjp\" (UID: \"968ce120-7ba4-48cc-a851-6001d23f80bd\") " pod="kube-system/coredns-674b8bbfcf-6wjjp" Mar 4 01:01:55.756557 kubelet[2614]: I0304 01:01:55.756357 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckl5\" (UniqueName: \"kubernetes.io/projected/6eeda1a6-1f9b-42a9-8645-346f1f25f12e-kube-api-access-lckl5\") pod \"coredns-674b8bbfcf-jqk6s\" (UID: \"6eeda1a6-1f9b-42a9-8645-346f1f25f12e\") " pod="kube-system/coredns-674b8bbfcf-jqk6s" Mar 4 01:01:55.756557 kubelet[2614]: I0304 01:01:55.756379 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3d0380e4-587c-4361-a5b6-a8c814a6baf0-goldmane-key-pair\") pod \"goldmane-5b85766d88-p2psc\" (UID: \"3d0380e4-587c-4361-a5b6-a8c814a6baf0\") " pod="calico-system/goldmane-5b85766d88-p2psc" Mar 4 01:01:55.975123 containerd[1466]: time="2026-03-04T01:01:55.974623197Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5548dddc45-46nf7,Uid:47638dfc-43ad-4f79-9126-05bb92d9f07d,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:55.992839 containerd[1466]: time="2026-03-04T01:01:55.992432184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-f4dhr,Uid:38597b56-fbbb-4af6-ab46-5447e9d3191f,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:56.010389 containerd[1466]: time="2026-03-04T01:01:56.010176055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-47wlk,Uid:f2d78d17-4768-48c2-ae26-3e7f45451d5a,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:56.021556 containerd[1466]: time="2026-03-04T01:01:56.021458781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-p2psc,Uid:3d0380e4-587c-4361-a5b6-a8c814a6baf0,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:56.036082 containerd[1466]: time="2026-03-04T01:01:56.035732167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56874799f8-qqwn4,Uid:17b10246-700e-4e39-9a06-ab5fa1ad9082,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:56.043635 kubelet[2614]: E0304 01:01:56.043556 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:56.046735 containerd[1466]: time="2026-03-04T01:01:56.046645368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqk6s,Uid:6eeda1a6-1f9b-42a9-8645-346f1f25f12e,Namespace:kube-system,Attempt:0,}" Mar 4 01:01:56.054526 kubelet[2614]: E0304 01:01:56.054091 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:56.056699 containerd[1466]: time="2026-03-04T01:01:56.056584786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-6wjjp,Uid:968ce120-7ba4-48cc-a851-6001d23f80bd,Namespace:kube-system,Attempt:0,}" Mar 4 01:01:56.307776 containerd[1466]: time="2026-03-04T01:01:56.307434661Z" level=error msg="Failed to destroy network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.314050 containerd[1466]: time="2026-03-04T01:01:56.313810164Z" level=error msg="Failed to destroy network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.315447 containerd[1466]: time="2026-03-04T01:01:56.315415908Z" level=error msg="Failed to destroy network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.317351 containerd[1466]: time="2026-03-04T01:01:56.316952896Z" level=error msg="encountered an error cleaning up failed sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.317351 containerd[1466]: time="2026-03-04T01:01:56.317019721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5548dddc45-46nf7,Uid:47638dfc-43ad-4f79-9126-05bb92d9f07d,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.318034 containerd[1466]: time="2026-03-04T01:01:56.318006092Z" level=error msg="encountered an error cleaning up failed sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.318166 containerd[1466]: time="2026-03-04T01:01:56.318141464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-47wlk,Uid:f2d78d17-4768-48c2-ae26-3e7f45451d5a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.330541 containerd[1466]: time="2026-03-04T01:01:56.330490569Z" level=error msg="Failed to destroy network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.331997 containerd[1466]: time="2026-03-04T01:01:56.331958177Z" level=error msg="encountered an error cleaning up failed sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.336165 kubelet[2614]: E0304 01:01:56.335949 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.336165 kubelet[2614]: E0304 01:01:56.336052 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5548dddc45-46nf7" Mar 4 01:01:56.336165 kubelet[2614]: E0304 01:01:56.335965 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.336848 kubelet[2614]: E0304 01:01:56.336181 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-5fd449cb54-47wlk" Mar 4 01:01:56.336848 kubelet[2614]: E0304 01:01:56.336205 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fd449cb54-47wlk" Mar 4 01:01:56.336848 kubelet[2614]: E0304 01:01:56.336132 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5548dddc45-46nf7" Mar 4 01:01:56.336963 containerd[1466]: time="2026-03-04T01:01:56.336392792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wjjp,Uid:968ce120-7ba4-48cc-a851-6001d23f80bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.336963 containerd[1466]: time="2026-03-04T01:01:56.333645595Z" level=error msg="Failed to destroy network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 
01:01:56.336963 containerd[1466]: time="2026-03-04T01:01:56.333686831Z" level=error msg="Failed to destroy network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.337151 kubelet[2614]: E0304 01:01:56.336440 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fd449cb54-47wlk_calico-system(f2d78d17-4768-48c2-ae26-3e7f45451d5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fd449cb54-47wlk_calico-system(f2d78d17-4768-48c2-ae26-3e7f45451d5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fd449cb54-47wlk" podUID="f2d78d17-4768-48c2-ae26-3e7f45451d5a" Mar 4 01:01:56.337151 kubelet[2614]: E0304 01:01:56.336519 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5548dddc45-46nf7_calico-system(47638dfc-43ad-4f79-9126-05bb92d9f07d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5548dddc45-46nf7_calico-system(47638dfc-43ad-4f79-9126-05bb92d9f07d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5548dddc45-46nf7" 
podUID="47638dfc-43ad-4f79-9126-05bb92d9f07d" Mar 4 01:01:56.337151 kubelet[2614]: E0304 01:01:56.336770 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.338629 containerd[1466]: time="2026-03-04T01:01:56.337067506Z" level=error msg="encountered an error cleaning up failed sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.338629 containerd[1466]: time="2026-03-04T01:01:56.337102682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56874799f8-qqwn4,Uid:17b10246-700e-4e39-9a06-ab5fa1ad9082,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.338629 containerd[1466]: time="2026-03-04T01:01:56.335104412Z" level=error msg="encountered an error cleaning up failed sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.338629 containerd[1466]: 
time="2026-03-04T01:01:56.337159398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-f4dhr,Uid:38597b56-fbbb-4af6-ab46-5447e9d3191f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.338629 containerd[1466]: time="2026-03-04T01:01:56.337630586Z" level=error msg="encountered an error cleaning up failed sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.338629 containerd[1466]: time="2026-03-04T01:01:56.337674869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-p2psc,Uid:3d0380e4-587c-4361-a5b6-a8c814a6baf0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.339032 kubelet[2614]: E0304 01:01:56.336794 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6wjjp" Mar 4 
01:01:56.339032 kubelet[2614]: E0304 01:01:56.336810 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6wjjp" Mar 4 01:01:56.339032 kubelet[2614]: E0304 01:01:56.336836 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6wjjp_kube-system(968ce120-7ba4-48cc-a851-6001d23f80bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6wjjp_kube-system(968ce120-7ba4-48cc-a851-6001d23f80bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6wjjp" podUID="968ce120-7ba4-48cc-a851-6001d23f80bd" Mar 4 01:01:56.339336 kubelet[2614]: E0304 01:01:56.337874 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.339336 kubelet[2614]: E0304 01:01:56.337902 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-p2psc" Mar 4 01:01:56.339336 kubelet[2614]: E0304 01:01:56.337917 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-p2psc" Mar 4 01:01:56.339474 kubelet[2614]: E0304 01:01:56.337945 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-p2psc_calico-system(3d0380e4-587c-4361-a5b6-a8c814a6baf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-p2psc_calico-system(3d0380e4-587c-4361-a5b6-a8c814a6baf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-p2psc" podUID="3d0380e4-587c-4361-a5b6-a8c814a6baf0" Mar 4 01:01:56.339474 kubelet[2614]: E0304 01:01:56.337979 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.339474 kubelet[2614]: E0304 
01:01:56.337995 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" Mar 4 01:01:56.339871 kubelet[2614]: E0304 01:01:56.338007 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" Mar 4 01:01:56.339871 kubelet[2614]: E0304 01:01:56.338033 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56874799f8-qqwn4_calico-system(17b10246-700e-4e39-9a06-ab5fa1ad9082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56874799f8-qqwn4_calico-system(17b10246-700e-4e39-9a06-ab5fa1ad9082)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" podUID="17b10246-700e-4e39-9a06-ab5fa1ad9082" Mar 4 01:01:56.339871 kubelet[2614]: E0304 01:01:56.338059 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.340058 kubelet[2614]: E0304 01:01:56.338083 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" Mar 4 01:01:56.340058 kubelet[2614]: E0304 01:01:56.338094 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" Mar 4 01:01:56.340058 kubelet[2614]: E0304 01:01:56.338382 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fd449cb54-f4dhr_calico-system(38597b56-fbbb-4af6-ab46-5447e9d3191f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fd449cb54-f4dhr_calico-system(38597b56-fbbb-4af6-ab46-5447e9d3191f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" podUID="38597b56-fbbb-4af6-ab46-5447e9d3191f" Mar 4 01:01:56.344081 containerd[1466]: time="2026-03-04T01:01:56.344049840Z" level=error msg="Failed to destroy network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.344860 containerd[1466]: time="2026-03-04T01:01:56.344789726Z" level=error msg="encountered an error cleaning up failed sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.344919 containerd[1466]: time="2026-03-04T01:01:56.344871358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqk6s,Uid:6eeda1a6-1f9b-42a9-8645-346f1f25f12e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.345405 kubelet[2614]: E0304 01:01:56.345349 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.345463 kubelet[2614]: E0304 01:01:56.345413 2614 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqk6s" Mar 4 01:01:56.345463 kubelet[2614]: E0304 01:01:56.345431 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqk6s" Mar 4 01:01:56.345527 kubelet[2614]: E0304 01:01:56.345462 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jqk6s_kube-system(6eeda1a6-1f9b-42a9-8645-346f1f25f12e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jqk6s_kube-system(6eeda1a6-1f9b-42a9-8645-346f1f25f12e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqk6s" podUID="6eeda1a6-1f9b-42a9-8645-346f1f25f12e" Mar 4 01:01:56.600684 kubelet[2614]: I0304 01:01:56.600500 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:01:56.602583 kubelet[2614]: I0304 01:01:56.602146 2614 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:01:56.604789 kubelet[2614]: I0304 01:01:56.604758 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:01:56.606453 kubelet[2614]: I0304 01:01:56.606429 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:01:56.619556 kubelet[2614]: I0304 01:01:56.619457 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:01:56.629335 containerd[1466]: time="2026-03-04T01:01:56.629190560Z" level=info msg="StopPodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\"" Mar 4 01:01:56.632341 containerd[1466]: time="2026-03-04T01:01:56.629969291Z" level=info msg="StopPodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\"" Mar 4 01:01:56.632341 containerd[1466]: time="2026-03-04T01:01:56.631129876Z" level=info msg="StopPodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\"" Mar 4 01:01:56.632938 containerd[1466]: time="2026-03-04T01:01:56.632780123Z" level=info msg="Ensure that sandbox f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4 in task-service has been cleanup successfully" Mar 4 01:01:56.633003 containerd[1466]: time="2026-03-04T01:01:56.632945901Z" level=info msg="StopPodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\"" Mar 4 01:01:56.633181 containerd[1466]: time="2026-03-04T01:01:56.633076374Z" level=info msg="Ensure that sandbox 97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8 in task-service has been cleanup successfully" Mar 4 01:01:56.633181 
containerd[1466]: time="2026-03-04T01:01:56.633089662Z" level=info msg="Ensure that sandbox 724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48 in task-service has been cleanup successfully" Mar 4 01:01:56.633181 containerd[1466]: time="2026-03-04T01:01:56.633111430Z" level=info msg="Ensure that sandbox 7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0 in task-service has been cleanup successfully" Mar 4 01:01:56.642601 containerd[1466]: time="2026-03-04T01:01:56.641927559Z" level=info msg="StopPodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\"" Mar 4 01:01:56.645133 kubelet[2614]: I0304 01:01:56.644546 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:01:56.646599 containerd[1466]: time="2026-03-04T01:01:56.646568077Z" level=info msg="StopPodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\"" Mar 4 01:01:56.648935 containerd[1466]: time="2026-03-04T01:01:56.648913327Z" level=info msg="Ensure that sandbox 3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229 in task-service has been cleanup successfully" Mar 4 01:01:56.654397 kubelet[2614]: I0304 01:01:56.654366 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:01:56.659143 containerd[1466]: time="2026-03-04T01:01:56.658588897Z" level=info msg="StopPodSandbox for \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\"" Mar 4 01:01:56.659143 containerd[1466]: time="2026-03-04T01:01:56.658770926Z" level=info msg="Ensure that sandbox bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b in task-service has been cleanup successfully" Mar 4 01:01:56.659636 containerd[1466]: time="2026-03-04T01:01:56.659113903Z" level=info msg="Ensure that sandbox 
2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917 in task-service has been cleanup successfully" Mar 4 01:01:56.725113 containerd[1466]: time="2026-03-04T01:01:56.724995964Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 01:01:56.765440 containerd[1466]: time="2026-03-04T01:01:56.765392261Z" level=error msg="StopPodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" failed" error="failed to destroy network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.767506 kubelet[2614]: E0304 01:01:56.767462 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:01:56.767593 kubelet[2614]: E0304 01:01:56.767531 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4"} Mar 4 01:01:56.767593 kubelet[2614]: E0304 01:01:56.767585 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d0380e4-587c-4361-a5b6-a8c814a6baf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.767790 kubelet[2614]: E0304 01:01:56.767610 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d0380e4-587c-4361-a5b6-a8c814a6baf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-p2psc" podUID="3d0380e4-587c-4361-a5b6-a8c814a6baf0" Mar 4 01:01:56.769027 containerd[1466]: time="2026-03-04T01:01:56.768911442Z" level=error msg="StopPodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" failed" error="failed to destroy network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.769143 kubelet[2614]: E0304 01:01:56.769089 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:01:56.769143 kubelet[2614]: E0304 01:01:56.769116 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0"} Mar 4 01:01:56.769143 kubelet[2614]: E0304 01:01:56.769138 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2d78d17-4768-48c2-ae26-3e7f45451d5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.769446 kubelet[2614]: E0304 01:01:56.769156 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2d78d17-4768-48c2-ae26-3e7f45451d5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fd449cb54-47wlk" podUID="f2d78d17-4768-48c2-ae26-3e7f45451d5a" Mar 4 01:01:56.770193 containerd[1466]: time="2026-03-04T01:01:56.770116627Z" level=error msg="StopPodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" failed" error="failed to destroy network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.770966 kubelet[2614]: E0304 01:01:56.770588 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:01:56.770966 kubelet[2614]: E0304 01:01:56.770616 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8"} Mar 4 01:01:56.770966 kubelet[2614]: E0304 01:01:56.770636 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17b10246-700e-4e39-9a06-ab5fa1ad9082\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.770966 kubelet[2614]: E0304 01:01:56.770721 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17b10246-700e-4e39-9a06-ab5fa1ad9082\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" podUID="17b10246-700e-4e39-9a06-ab5fa1ad9082" Mar 4 01:01:56.774144 containerd[1466]: time="2026-03-04T01:01:56.774114991Z" level=error msg="StopPodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" failed" error="failed to 
destroy network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.775047 kubelet[2614]: E0304 01:01:56.775020 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:01:56.775684 kubelet[2614]: E0304 01:01:56.775563 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48"} Mar 4 01:01:56.775837 kubelet[2614]: E0304 01:01:56.775819 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"968ce120-7ba4-48cc-a851-6001d23f80bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.776118 kubelet[2614]: E0304 01:01:56.776017 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"968ce120-7ba4-48cc-a851-6001d23f80bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6wjjp" podUID="968ce120-7ba4-48cc-a851-6001d23f80bd" Mar 4 01:01:56.777315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747206208.mount: Deactivated successfully. Mar 4 01:01:56.785004 containerd[1466]: time="2026-03-04T01:01:56.784757734Z" level=error msg="StopPodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" failed" error="failed to destroy network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.785107 kubelet[2614]: E0304 01:01:56.784976 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:01:56.785107 kubelet[2614]: E0304 01:01:56.785016 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917"} Mar 4 01:01:56.785107 kubelet[2614]: E0304 01:01:56.785045 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6eeda1a6-1f9b-42a9-8645-346f1f25f12e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.785557 kubelet[2614]: E0304 01:01:56.785070 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6eeda1a6-1f9b-42a9-8645-346f1f25f12e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqk6s" podUID="6eeda1a6-1f9b-42a9-8645-346f1f25f12e" Mar 4 01:01:56.790488 containerd[1466]: time="2026-03-04T01:01:56.790450910Z" level=error msg="StopPodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" failed" error="failed to destroy network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.791699 kubelet[2614]: E0304 01:01:56.791009 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:01:56.791699 kubelet[2614]: E0304 01:01:56.791116 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229"} Mar 4 01:01:56.791699 kubelet[2614]: E0304 01:01:56.791149 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38597b56-fbbb-4af6-ab46-5447e9d3191f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.791699 kubelet[2614]: E0304 01:01:56.791173 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38597b56-fbbb-4af6-ab46-5447e9d3191f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" podUID="38597b56-fbbb-4af6-ab46-5447e9d3191f" Mar 4 01:01:56.792136 containerd[1466]: time="2026-03-04T01:01:56.791965997Z" level=info msg="CreateContainer within sandbox \"021acc2b537dd666d29a888af0c4668752ffcd46dbcd3f5594902ee5fa74a2bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"823cac8ed40b0007ae5fef2e78c858d68582c3aaa08ad2d08171618e014753ac\"" Mar 4 01:01:56.794706 containerd[1466]: time="2026-03-04T01:01:56.793040510Z" level=info msg="StartContainer for \"823cac8ed40b0007ae5fef2e78c858d68582c3aaa08ad2d08171618e014753ac\"" Mar 4 01:01:56.819905 containerd[1466]: time="2026-03-04T01:01:56.819806205Z" level=error msg="StopPodSandbox for 
\"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" failed" error="failed to destroy network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:56.820194 kubelet[2614]: E0304 01:01:56.820107 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:01:56.820378 kubelet[2614]: E0304 01:01:56.820192 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b"} Mar 4 01:01:56.820417 kubelet[2614]: E0304 01:01:56.820387 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47638dfc-43ad-4f79-9126-05bb92d9f07d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:01:56.820633 kubelet[2614]: E0304 01:01:56.820419 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47638dfc-43ad-4f79-9126-05bb92d9f07d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5548dddc45-46nf7" podUID="47638dfc-43ad-4f79-9126-05bb92d9f07d" Mar 4 01:01:56.846550 systemd[1]: Started cri-containerd-823cac8ed40b0007ae5fef2e78c858d68582c3aaa08ad2d08171618e014753ac.scope - libcontainer container 823cac8ed40b0007ae5fef2e78c858d68582c3aaa08ad2d08171618e014753ac. Mar 4 01:01:56.919571 containerd[1466]: time="2026-03-04T01:01:56.919138493Z" level=info msg="StartContainer for \"823cac8ed40b0007ae5fef2e78c858d68582c3aaa08ad2d08171618e014753ac\" returns successfully" Mar 4 01:01:56.937404 systemd[1]: Created slice kubepods-besteffort-pode5349a34_0a7e_48e8_966b_ab286041115e.slice - libcontainer container kubepods-besteffort-pode5349a34_0a7e_48e8_966b_ab286041115e.slice. Mar 4 01:01:56.942554 containerd[1466]: time="2026-03-04T01:01:56.942452323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lkt8,Uid:e5349a34-0a7e-48e8-966b-ab286041115e,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:57.031497 containerd[1466]: time="2026-03-04T01:01:57.031321827Z" level=error msg="Failed to destroy network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:57.031946 containerd[1466]: time="2026-03-04T01:01:57.031862191Z" level=error msg="encountered an error cleaning up failed sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 4 01:01:57.032018 containerd[1466]: time="2026-03-04T01:01:57.031959162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lkt8,Uid:e5349a34-0a7e-48e8-966b-ab286041115e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:57.032987 kubelet[2614]: E0304 01:01:57.032427 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:01:57.032987 kubelet[2614]: E0304 01:01:57.032510 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:57.032987 kubelet[2614]: E0304 01:01:57.032541 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-2lkt8" Mar 4 01:01:57.033361 kubelet[2614]: E0304 01:01:57.032611 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2lkt8_calico-system(e5349a34-0a7e-48e8-966b-ab286041115e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2lkt8_calico-system(e5349a34-0a7e-48e8-966b-ab286041115e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2lkt8" podUID="e5349a34-0a7e-48e8-966b-ab286041115e" Mar 4 01:01:57.670037 kubelet[2614]: I0304 01:01:57.669973 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:01:57.672866 containerd[1466]: time="2026-03-04T01:01:57.671022667Z" level=info msg="StopPodSandbox for \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\"" Mar 4 01:01:57.672866 containerd[1466]: time="2026-03-04T01:01:57.671356679Z" level=info msg="Ensure that sandbox 4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727 in task-service has been cleanup successfully" Mar 4 01:01:57.675650 containerd[1466]: time="2026-03-04T01:01:57.675140509Z" level=info msg="StopPodSandbox for \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\"" Mar 4 01:01:57.768192 kubelet[2614]: I0304 01:01:57.768074 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ss68l" podStartSLOduration=4.652879497 podStartE2EDuration="17.768044461s" podCreationTimestamp="2026-03-04 01:01:40 +0000 UTC" firstStartedPulling="2026-03-04 01:01:41.26265195 +0000 UTC 
m=+24.543939888" lastFinishedPulling="2026-03-04 01:01:54.377816925 +0000 UTC m=+37.659104852" observedRunningTime="2026-03-04 01:01:57.716053552 +0000 UTC m=+40.997341498" watchObservedRunningTime="2026-03-04 01:01:57.768044461 +0000 UTC m=+41.049332418" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.772 [INFO][3935] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.772 [INFO][3935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" iface="eth0" netns="/var/run/netns/cni-6bad56bc-d904-0753-1641-84da82cf73b3" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.773 [INFO][3935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" iface="eth0" netns="/var/run/netns/cni-6bad56bc-d904-0753-1641-84da82cf73b3" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.773 [INFO][3935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" iface="eth0" netns="/var/run/netns/cni-6bad56bc-d904-0753-1641-84da82cf73b3" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.773 [INFO][3935] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.773 [INFO][3935] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.823 [INFO][3952] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.823 [INFO][3952] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.823 [INFO][3952] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.844 [WARNING][3952] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.844 [INFO][3952] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.852 [INFO][3952] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:01:57.860844 containerd[1466]: 2026-03-04 01:01:57.858 [INFO][3935] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:01:57.863639 containerd[1466]: time="2026-03-04T01:01:57.863552931Z" level=info msg="TearDown network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" successfully" Mar 4 01:01:57.863639 containerd[1466]: time="2026-03-04T01:01:57.863623141Z" level=info msg="StopPodSandbox for \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" returns successfully" Mar 4 01:01:57.865582 systemd[1]: run-netns-cni\x2d6bad56bc\x2dd904\x2d0753\x2d1641\x2d84da82cf73b3.mount: Deactivated successfully. Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.767 [INFO][3936] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.769 [INFO][3936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" iface="eth0" netns="/var/run/netns/cni-04474525-7306-4cbc-d541-e24c950356a0" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.770 [INFO][3936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" iface="eth0" netns="/var/run/netns/cni-04474525-7306-4cbc-d541-e24c950356a0" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.771 [INFO][3936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" iface="eth0" netns="/var/run/netns/cni-04474525-7306-4cbc-d541-e24c950356a0" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.771 [INFO][3936] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.771 [INFO][3936] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.827 [INFO][3950] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.827 [INFO][3950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.852 [INFO][3950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.864 [WARNING][3950] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.864 [INFO][3950] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.867 [INFO][3950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:01:57.872968 containerd[1466]: 2026-03-04 01:01:57.869 [INFO][3936] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:01:57.876367 containerd[1466]: time="2026-03-04T01:01:57.873622799Z" level=info msg="TearDown network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\" successfully" Mar 4 01:01:57.876367 containerd[1466]: time="2026-03-04T01:01:57.873655881Z" level=info msg="StopPodSandbox for \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\" returns successfully" Mar 4 01:01:57.876367 containerd[1466]: time="2026-03-04T01:01:57.874805838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lkt8,Uid:e5349a34-0a7e-48e8-966b-ab286041115e,Namespace:calico-system,Attempt:1,}" Mar 4 01:01:57.876130 systemd[1]: run-netns-cni\x2d04474525\x2d7306\x2d4cbc\x2dd541\x2de24c950356a0.mount: Deactivated successfully. 
Mar 4 01:01:57.983759 kubelet[2614]: I0304 01:01:57.983536 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-ca-bundle\") pod \"47638dfc-43ad-4f79-9126-05bb92d9f07d\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " Mar 4 01:01:57.983759 kubelet[2614]: I0304 01:01:57.983609 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84l8n\" (UniqueName: \"kubernetes.io/projected/47638dfc-43ad-4f79-9126-05bb92d9f07d-kube-api-access-84l8n\") pod \"47638dfc-43ad-4f79-9126-05bb92d9f07d\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " Mar 4 01:01:57.983759 kubelet[2614]: I0304 01:01:57.983648 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-nginx-config\") pod \"47638dfc-43ad-4f79-9126-05bb92d9f07d\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " Mar 4 01:01:57.983759 kubelet[2614]: I0304 01:01:57.983680 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-backend-key-pair\") pod \"47638dfc-43ad-4f79-9126-05bb92d9f07d\" (UID: \"47638dfc-43ad-4f79-9126-05bb92d9f07d\") " Mar 4 01:01:57.985002 kubelet[2614]: I0304 01:01:57.984857 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "47638dfc-43ad-4f79-9126-05bb92d9f07d" (UID: "47638dfc-43ad-4f79-9126-05bb92d9f07d"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:01:57.985443 kubelet[2614]: I0304 01:01:57.985413 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "47638dfc-43ad-4f79-9126-05bb92d9f07d" (UID: "47638dfc-43ad-4f79-9126-05bb92d9f07d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:01:58.008655 kubelet[2614]: I0304 01:01:58.008562 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47638dfc-43ad-4f79-9126-05bb92d9f07d-kube-api-access-84l8n" (OuterVolumeSpecName: "kube-api-access-84l8n") pod "47638dfc-43ad-4f79-9126-05bb92d9f07d" (UID: "47638dfc-43ad-4f79-9126-05bb92d9f07d"). InnerVolumeSpecName "kube-api-access-84l8n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:01:58.009411 kubelet[2614]: I0304 01:01:58.009357 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "47638dfc-43ad-4f79-9126-05bb92d9f07d" (UID: "47638dfc-43ad-4f79-9126-05bb92d9f07d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:01:58.085348 kubelet[2614]: I0304 01:01:58.085086 2614 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 4 01:01:58.085348 kubelet[2614]: I0304 01:01:58.085168 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 4 01:01:58.085348 kubelet[2614]: I0304 01:01:58.085184 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47638dfc-43ad-4f79-9126-05bb92d9f07d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 4 01:01:58.085348 kubelet[2614]: I0304 01:01:58.085194 2614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-84l8n\" (UniqueName: \"kubernetes.io/projected/47638dfc-43ad-4f79-9126-05bb92d9f07d-kube-api-access-84l8n\") on node \"localhost\" DevicePath \"\"" Mar 4 01:01:58.104409 systemd-networkd[1404]: cali1682198fdd6: Link UP Mar 4 01:01:58.108537 systemd-networkd[1404]: cali1682198fdd6: Gained carrier Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:57.945 [ERROR][3967] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:57.964 [INFO][3967] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2lkt8-eth0 csi-node-driver- calico-system e5349a34-0a7e-48e8-966b-ab286041115e 963 0 2026-03-04 01:01:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2lkt8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1682198fdd6 [] [] }} ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:57.964 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.024 [INFO][3982] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" HandleID="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.033 [INFO][3982] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" HandleID="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2lkt8", "timestamp":"2026-03-04 01:01:58.024006153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00054ef20)} Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.033 [INFO][3982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.033 [INFO][3982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.033 [INFO][3982] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.038 [INFO][3982] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.044 [INFO][3982] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.051 [INFO][3982] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.054 [INFO][3982] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.057 [INFO][3982] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.057 [INFO][3982] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.059 [INFO][3982] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620 Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.064 [INFO][3982] ipam/ipam.go 1272: Writing 
block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.073 [INFO][3982] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.073 [INFO][3982] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" host="localhost" Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.073 [INFO][3982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:01:58.135855 containerd[1466]: 2026-03-04 01:01:58.073 [INFO][3982] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" HandleID="k8s-pod-network.1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.136865 containerd[1466]: 2026-03-04 01:01:58.077 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lkt8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5349a34-0a7e-48e8-966b-ab286041115e", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2lkt8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1682198fdd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:01:58.136865 containerd[1466]: 2026-03-04 01:01:58.077 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.136865 containerd[1466]: 2026-03-04 01:01:58.077 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1682198fdd6 ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.136865 containerd[1466]: 2026-03-04 01:01:58.107 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" 
Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.136865 containerd[1466]: 2026-03-04 01:01:58.109 [INFO][3967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lkt8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5349a34-0a7e-48e8-966b-ab286041115e", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620", Pod:"csi-node-driver-2lkt8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1682198fdd6", MAC:"86:a5:f9:5a:69:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:01:58.136865 containerd[1466]: 2026-03-04 01:01:58.130 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620" Namespace="calico-system" Pod="csi-node-driver-2lkt8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:01:58.167649 containerd[1466]: time="2026-03-04T01:01:58.167398199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:58.167649 containerd[1466]: time="2026-03-04T01:01:58.167495800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:58.167649 containerd[1466]: time="2026-03-04T01:01:58.167512402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:58.167886 containerd[1466]: time="2026-03-04T01:01:58.167661298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:58.207614 systemd[1]: Started cri-containerd-1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620.scope - libcontainer container 1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620. 
Mar 4 01:01:58.228093 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:01:58.248666 containerd[1466]: time="2026-03-04T01:01:58.248427015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lkt8,Uid:e5349a34-0a7e-48e8-966b-ab286041115e,Namespace:calico-system,Attempt:1,} returns sandbox id \"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620\"" Mar 4 01:01:58.250996 containerd[1466]: time="2026-03-04T01:01:58.250744098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 01:01:58.468096 systemd[1]: var-lib-kubelet-pods-47638dfc\x2d43ad\x2d4f79\x2d9126\x2d05bb92d9f07d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d84l8n.mount: Deactivated successfully. Mar 4 01:01:58.468415 systemd[1]: var-lib-kubelet-pods-47638dfc\x2d43ad\x2d4f79\x2d9126\x2d05bb92d9f07d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 01:01:58.716034 systemd[1]: Removed slice kubepods-besteffort-pod47638dfc_43ad_4f79_9126_05bb92d9f07d.slice - libcontainer container kubepods-besteffort-pod47638dfc_43ad_4f79_9126_05bb92d9f07d.slice. Mar 4 01:01:58.837341 systemd[1]: Created slice kubepods-besteffort-pod647e7528_0953_4a64_bfd7_3fcac75551d3.slice - libcontainer container kubepods-besteffort-pod647e7528_0953_4a64_bfd7_3fcac75551d3.slice. 
Mar 4 01:01:58.892171 kubelet[2614]: I0304 01:01:58.892119 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/647e7528-0953-4a64-bfd7-3fcac75551d3-nginx-config\") pod \"whisker-5ff8d78c66-dvmmh\" (UID: \"647e7528-0953-4a64-bfd7-3fcac75551d3\") " pod="calico-system/whisker-5ff8d78c66-dvmmh" Mar 4 01:01:58.893171 kubelet[2614]: I0304 01:01:58.893033 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdsb\" (UniqueName: \"kubernetes.io/projected/647e7528-0953-4a64-bfd7-3fcac75551d3-kube-api-access-6xdsb\") pod \"whisker-5ff8d78c66-dvmmh\" (UID: \"647e7528-0953-4a64-bfd7-3fcac75551d3\") " pod="calico-system/whisker-5ff8d78c66-dvmmh" Mar 4 01:01:58.893171 kubelet[2614]: I0304 01:01:58.893083 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/647e7528-0953-4a64-bfd7-3fcac75551d3-whisker-ca-bundle\") pod \"whisker-5ff8d78c66-dvmmh\" (UID: \"647e7528-0953-4a64-bfd7-3fcac75551d3\") " pod="calico-system/whisker-5ff8d78c66-dvmmh" Mar 4 01:01:58.893171 kubelet[2614]: I0304 01:01:58.893110 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/647e7528-0953-4a64-bfd7-3fcac75551d3-whisker-backend-key-pair\") pod \"whisker-5ff8d78c66-dvmmh\" (UID: \"647e7528-0953-4a64-bfd7-3fcac75551d3\") " pod="calico-system/whisker-5ff8d78c66-dvmmh" Mar 4 01:01:58.931915 kubelet[2614]: I0304 01:01:58.931821 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47638dfc-43ad-4f79-9126-05bb92d9f07d" path="/var/lib/kubelet/pods/47638dfc-43ad-4f79-9126-05bb92d9f07d/volumes" Mar 4 01:01:59.114053 containerd[1466]: time="2026-03-04T01:01:59.113933440Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:59.115173 containerd[1466]: time="2026-03-04T01:01:59.115042772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 4 01:01:59.115974 containerd[1466]: time="2026-03-04T01:01:59.115925544Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:59.118909 containerd[1466]: time="2026-03-04T01:01:59.118667483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:59.121549 containerd[1466]: time="2026-03-04T01:01:59.121502718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 870.719396ms" Mar 4 01:01:59.121933 containerd[1466]: time="2026-03-04T01:01:59.121554263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 4 01:01:59.126912 containerd[1466]: time="2026-03-04T01:01:59.126849969Z" level=info msg="CreateContainer within sandbox \"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 01:01:59.149876 containerd[1466]: time="2026-03-04T01:01:59.149814837Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5ff8d78c66-dvmmh,Uid:647e7528-0953-4a64-bfd7-3fcac75551d3,Namespace:calico-system,Attempt:0,}" Mar 4 01:01:59.152605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324309992.mount: Deactivated successfully. Mar 4 01:01:59.156060 containerd[1466]: time="2026-03-04T01:01:59.155515814Z" level=info msg="CreateContainer within sandbox \"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f2440efb1fa3b783111189f6e31765d46ec8bcbed781c2bb0e2f6154d9dad95a\"" Mar 4 01:01:59.157389 containerd[1466]: time="2026-03-04T01:01:59.157332257Z" level=info msg="StartContainer for \"f2440efb1fa3b783111189f6e31765d46ec8bcbed781c2bb0e2f6154d9dad95a\"" Mar 4 01:01:59.195347 kernel: calico-node[4075]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 01:01:59.228468 systemd[1]: Started cri-containerd-f2440efb1fa3b783111189f6e31765d46ec8bcbed781c2bb0e2f6154d9dad95a.scope - libcontainer container f2440efb1fa3b783111189f6e31765d46ec8bcbed781c2bb0e2f6154d9dad95a. 
Mar 4 01:01:59.406713 containerd[1466]: time="2026-03-04T01:01:59.405623551Z" level=info msg="StartContainer for \"f2440efb1fa3b783111189f6e31765d46ec8bcbed781c2bb0e2f6154d9dad95a\" returns successfully" Mar 4 01:01:59.424595 containerd[1466]: time="2026-03-04T01:01:59.424083328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 01:01:59.445361 systemd-networkd[1404]: cali24f4c67309f: Link UP Mar 4 01:01:59.451520 systemd-networkd[1404]: cali24f4c67309f: Gained carrier Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.258 [INFO][4200] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0 whisker-5ff8d78c66- calico-system 647e7528-0953-4a64-bfd7-3fcac75551d3 982 0 2026-03-04 01:01:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5ff8d78c66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5ff8d78c66-dvmmh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali24f4c67309f [] [] }} ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.259 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.326 [INFO][4230] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" 
HandleID="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Workload="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.339 [INFO][4230] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" HandleID="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Workload="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5ff8d78c66-dvmmh", "timestamp":"2026-03-04 01:01:59.326526097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000342000)} Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.339 [INFO][4230] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.340 [INFO][4230] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.340 [INFO][4230] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.345 [INFO][4230] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.363 [INFO][4230] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.371 [INFO][4230] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.376 [INFO][4230] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.380 [INFO][4230] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.380 [INFO][4230] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.383 [INFO][4230] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873 Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.396 [INFO][4230] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.427 [INFO][4230] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.428 [INFO][4230] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" host="localhost" Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.429 [INFO][4230] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:01:59.488625 containerd[1466]: 2026-03-04 01:01:59.429 [INFO][4230] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" HandleID="k8s-pod-network.87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Workload="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.490031 containerd[1466]: 2026-03-04 01:01:59.438 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0", GenerateName:"whisker-5ff8d78c66-", Namespace:"calico-system", SelfLink:"", UID:"647e7528-0953-4a64-bfd7-3fcac75551d3", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ff8d78c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5ff8d78c66-dvmmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali24f4c67309f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:01:59.490031 containerd[1466]: 2026-03-04 01:01:59.438 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.490031 containerd[1466]: 2026-03-04 01:01:59.438 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24f4c67309f ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.490031 containerd[1466]: 2026-03-04 01:01:59.448 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.490031 containerd[1466]: 2026-03-04 01:01:59.449 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" 
WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0", GenerateName:"whisker-5ff8d78c66-", Namespace:"calico-system", SelfLink:"", UID:"647e7528-0953-4a64-bfd7-3fcac75551d3", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ff8d78c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873", Pod:"whisker-5ff8d78c66-dvmmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali24f4c67309f", MAC:"c6:5a:0f:62:23:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:01:59.490031 containerd[1466]: 2026-03-04 01:01:59.482 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873" Namespace="calico-system" Pod="whisker-5ff8d78c66-dvmmh" WorkloadEndpoint="localhost-k8s-whisker--5ff8d78c66--dvmmh-eth0" Mar 4 01:01:59.548850 containerd[1466]: time="2026-03-04T01:01:59.548611084Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:59.550380 containerd[1466]: time="2026-03-04T01:01:59.548996421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:59.551643 containerd[1466]: time="2026-03-04T01:01:59.550309694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:59.551643 containerd[1466]: time="2026-03-04T01:01:59.550512901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:59.629515 systemd[1]: Started cri-containerd-87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873.scope - libcontainer container 87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873. Mar 4 01:01:59.665759 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:01:59.767592 containerd[1466]: time="2026-03-04T01:01:59.767543692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff8d78c66-dvmmh,Uid:647e7528-0953-4a64-bfd7-3fcac75551d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873\"" Mar 4 01:01:59.886925 systemd-networkd[1404]: cali1682198fdd6: Gained IPv6LL Mar 4 01:02:00.213421 systemd-networkd[1404]: vxlan.calico: Link UP Mar 4 01:02:00.213777 systemd-networkd[1404]: vxlan.calico: Gained carrier Mar 4 01:02:00.465072 containerd[1466]: time="2026-03-04T01:02:00.464833557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:00.466444 containerd[1466]: time="2026-03-04T01:02:00.465826256Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 4 01:02:00.467403 containerd[1466]: time="2026-03-04T01:02:00.467350944Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:00.471478 containerd[1466]: time="2026-03-04T01:02:00.471418578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:00.472988 containerd[1466]: time="2026-03-04T01:02:00.472913096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.048539279s" Mar 4 01:02:00.473062 containerd[1466]: time="2026-03-04T01:02:00.472990540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 4 01:02:00.475193 containerd[1466]: time="2026-03-04T01:02:00.475169398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 01:02:00.480956 containerd[1466]: time="2026-03-04T01:02:00.480919948Z" level=info msg="CreateContainer within sandbox \"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 01:02:00.529673 containerd[1466]: time="2026-03-04T01:02:00.529603685Z" level=info msg="CreateContainer within sandbox 
\"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"12ea3f2983953f6bf0bc4f28767bea7995b7c4c7cbd68e4dd32b5b51ae362efe\"" Mar 4 01:02:00.532025 containerd[1466]: time="2026-03-04T01:02:00.530699294Z" level=info msg="StartContainer for \"12ea3f2983953f6bf0bc4f28767bea7995b7c4c7cbd68e4dd32b5b51ae362efe\"" Mar 4 01:02:00.582548 systemd[1]: run-containerd-runc-k8s.io-12ea3f2983953f6bf0bc4f28767bea7995b7c4c7cbd68e4dd32b5b51ae362efe-runc.tKCKwL.mount: Deactivated successfully. Mar 4 01:02:00.591637 systemd-networkd[1404]: cali24f4c67309f: Gained IPv6LL Mar 4 01:02:00.598532 systemd[1]: Started cri-containerd-12ea3f2983953f6bf0bc4f28767bea7995b7c4c7cbd68e4dd32b5b51ae362efe.scope - libcontainer container 12ea3f2983953f6bf0bc4f28767bea7995b7c4c7cbd68e4dd32b5b51ae362efe. Mar 4 01:02:00.687564 containerd[1466]: time="2026-03-04T01:02:00.687177873Z" level=info msg="StartContainer for \"12ea3f2983953f6bf0bc4f28767bea7995b7c4c7cbd68e4dd32b5b51ae362efe\" returns successfully" Mar 4 01:02:01.223369 containerd[1466]: time="2026-03-04T01:02:01.223147439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:01.224304 containerd[1466]: time="2026-03-04T01:02:01.224145905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 4 01:02:01.226095 containerd[1466]: time="2026-03-04T01:02:01.226049238Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:01.228998 containerd[1466]: time="2026-03-04T01:02:01.228925197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:01.230625 containerd[1466]: time="2026-03-04T01:02:01.230545792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 755.138431ms" Mar 4 01:02:01.230625 containerd[1466]: time="2026-03-04T01:02:01.230613158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 4 01:02:01.236838 containerd[1466]: time="2026-03-04T01:02:01.236726408Z" level=info msg="CreateContainer within sandbox \"87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 01:02:01.254519 containerd[1466]: time="2026-03-04T01:02:01.254429578Z" level=info msg="CreateContainer within sandbox \"87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"922d2a36bdc5a3079d948abadb43d9ff99749e1c0e44631364177a660ae0a8fc\"" Mar 4 01:02:01.255510 containerd[1466]: time="2026-03-04T01:02:01.255456312Z" level=info msg="StartContainer for \"922d2a36bdc5a3079d948abadb43d9ff99749e1c0e44631364177a660ae0a8fc\"" Mar 4 01:02:01.320854 systemd[1]: Started cri-containerd-922d2a36bdc5a3079d948abadb43d9ff99749e1c0e44631364177a660ae0a8fc.scope - libcontainer container 922d2a36bdc5a3079d948abadb43d9ff99749e1c0e44631364177a660ae0a8fc. 
Mar 4 01:02:01.382199 containerd[1466]: time="2026-03-04T01:02:01.382055529Z" level=info msg="StartContainer for \"922d2a36bdc5a3079d948abadb43d9ff99749e1c0e44631364177a660ae0a8fc\" returns successfully" Mar 4 01:02:01.391438 containerd[1466]: time="2026-03-04T01:02:01.387540711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 01:02:01.613568 kubelet[2614]: I0304 01:02:01.613496 2614 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 01:02:01.614979 kubelet[2614]: I0304 01:02:01.614868 2614 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 01:02:02.127918 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Mar 4 01:02:02.356986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171908187.mount: Deactivated successfully. 
Mar 4 01:02:02.380848 containerd[1466]: time="2026-03-04T01:02:02.380646665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:02.381964 containerd[1466]: time="2026-03-04T01:02:02.381809116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 4 01:02:02.383555 containerd[1466]: time="2026-03-04T01:02:02.383502574Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:02.387563 containerd[1466]: time="2026-03-04T01:02:02.387318671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:02.388664 containerd[1466]: time="2026-03-04T01:02:02.388558439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.000869442s" Mar 4 01:02:02.388728 containerd[1466]: time="2026-03-04T01:02:02.388664848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 4 01:02:02.404099 containerd[1466]: time="2026-03-04T01:02:02.403987284Z" level=info msg="CreateContainer within sandbox \"87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 01:02:02.421983 
containerd[1466]: time="2026-03-04T01:02:02.421895454Z" level=info msg="CreateContainer within sandbox \"87d36242ab9807646238f08ff19196864b68861b2b8ade527005a9c165938873\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"88c9dcdad42bd05445ef7c4ea77d19602152b2646a6e99b2f8a397578850d815\"" Mar 4 01:02:02.422781 containerd[1466]: time="2026-03-04T01:02:02.422758017Z" level=info msg="StartContainer for \"88c9dcdad42bd05445ef7c4ea77d19602152b2646a6e99b2f8a397578850d815\"" Mar 4 01:02:02.465479 systemd[1]: Started cri-containerd-88c9dcdad42bd05445ef7c4ea77d19602152b2646a6e99b2f8a397578850d815.scope - libcontainer container 88c9dcdad42bd05445ef7c4ea77d19602152b2646a6e99b2f8a397578850d815. Mar 4 01:02:02.532709 containerd[1466]: time="2026-03-04T01:02:02.532593631Z" level=info msg="StartContainer for \"88c9dcdad42bd05445ef7c4ea77d19602152b2646a6e99b2f8a397578850d815\" returns successfully" Mar 4 01:02:02.742792 kubelet[2614]: I0304 01:02:02.742572 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2lkt8" podStartSLOduration=20.518900178 podStartE2EDuration="22.742552646s" podCreationTimestamp="2026-03-04 01:01:40 +0000 UTC" firstStartedPulling="2026-03-04 01:01:58.250503636 +0000 UTC m=+41.531791563" lastFinishedPulling="2026-03-04 01:02:00.474156104 +0000 UTC m=+43.755444031" observedRunningTime="2026-03-04 01:02:00.75183778 +0000 UTC m=+44.033125707" watchObservedRunningTime="2026-03-04 01:02:02.742552646 +0000 UTC m=+46.023840574" Mar 4 01:02:07.930960 containerd[1466]: time="2026-03-04T01:02:07.930766648Z" level=info msg="StopPodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\"" Mar 4 01:02:08.169063 kubelet[2614]: I0304 01:02:08.164957 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5ff8d78c66-dvmmh" podStartSLOduration=7.545686433 podStartE2EDuration="10.164934235s" podCreationTimestamp="2026-03-04 
01:01:58 +0000 UTC" firstStartedPulling="2026-03-04 01:01:59.770776895 +0000 UTC m=+43.052064832" lastFinishedPulling="2026-03-04 01:02:02.390024707 +0000 UTC m=+45.671312634" observedRunningTime="2026-03-04 01:02:02.741890777 +0000 UTC m=+46.023178714" watchObservedRunningTime="2026-03-04 01:02:08.164934235 +0000 UTC m=+51.446222162" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.160 [INFO][4545] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.161 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" iface="eth0" netns="/var/run/netns/cni-10b5e653-87bf-7f5d-f638-ea2cf07f6319" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.165 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" iface="eth0" netns="/var/run/netns/cni-10b5e653-87bf-7f5d-f638-ea2cf07f6319" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.167 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" iface="eth0" netns="/var/run/netns/cni-10b5e653-87bf-7f5d-f638-ea2cf07f6319" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.167 [INFO][4545] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.167 [INFO][4545] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.321 [INFO][4553] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.324 [INFO][4553] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.324 [INFO][4553] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.352 [WARNING][4553] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.352 [INFO][4553] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.365 [INFO][4553] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:08.398810 containerd[1466]: 2026-03-04 01:02:08.387 [INFO][4545] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:08.411037 containerd[1466]: time="2026-03-04T01:02:08.410919620Z" level=info msg="TearDown network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" successfully" Mar 4 01:02:08.411037 containerd[1466]: time="2026-03-04T01:02:08.411013905Z" level=info msg="StopPodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" returns successfully" Mar 4 01:02:08.411677 kubelet[2614]: E0304 01:02:08.411591 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:08.417408 containerd[1466]: time="2026-03-04T01:02:08.415939049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqk6s,Uid:6eeda1a6-1f9b-42a9-8645-346f1f25f12e,Namespace:kube-system,Attempt:1,}" Mar 4 01:02:08.416579 systemd[1]: run-netns-cni\x2d10b5e653\x2d87bf\x2d7f5d\x2df638\x2dea2cf07f6319.mount: Deactivated successfully. 
Mar 4 01:02:08.937826 containerd[1466]: time="2026-03-04T01:02:08.936745821Z" level=info msg="StopPodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\"" Mar 4 01:02:08.942613 containerd[1466]: time="2026-03-04T01:02:08.939039954Z" level=info msg="StopPodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\"" Mar 4 01:02:09.083600 systemd-networkd[1404]: calid6b05a2b50f: Link UP Mar 4 01:02:09.100651 systemd-networkd[1404]: calid6b05a2b50f: Gained carrier Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.636 [INFO][4561] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0 coredns-674b8bbfcf- kube-system 6eeda1a6-1f9b-42a9-8645-346f1f25f12e 1034 0 2026-03-04 01:01:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jqk6s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6b05a2b50f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.636 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.746 [INFO][4577] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" 
HandleID="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.772 [INFO][4577] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" HandleID="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002760c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jqk6s", "timestamp":"2026-03-04 01:02:08.745762221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00043e420)} Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.772 [INFO][4577] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.772 [INFO][4577] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.772 [INFO][4577] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.786 [INFO][4577] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.810 [INFO][4577] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.865 [INFO][4577] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.886 [INFO][4577] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.900 [INFO][4577] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.900 [INFO][4577] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.916 [INFO][4577] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099 Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:08.956 [INFO][4577] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:09.008 [INFO][4577] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:09.011 [INFO][4577] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" host="localhost" Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:09.011 [INFO][4577] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:09.178203 containerd[1466]: 2026-03-04 01:02:09.011 [INFO][4577] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" HandleID="k8s-pod-network.4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.179600 containerd[1466]: 2026-03-04 01:02:09.022 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6eeda1a6-1f9b-42a9-8645-346f1f25f12e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jqk6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6b05a2b50f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:09.179600 containerd[1466]: 2026-03-04 01:02:09.061 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.179600 containerd[1466]: 2026-03-04 01:02:09.063 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6b05a2b50f ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.179600 containerd[1466]: 2026-03-04 01:02:09.093 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.179600 containerd[1466]: 2026-03-04 01:02:09.094 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6eeda1a6-1f9b-42a9-8645-346f1f25f12e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099", Pod:"coredns-674b8bbfcf-jqk6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6b05a2b50f", MAC:"82:81:3c:14:44:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:09.179600 containerd[1466]: 2026-03-04 01:02:09.172 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqk6s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:09.345098 containerd[1466]: time="2026-03-04T01:02:09.344674580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:09.346079 containerd[1466]: time="2026-03-04T01:02:09.345342833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:09.346079 containerd[1466]: time="2026-03-04T01:02:09.345482102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:09.349527 containerd[1466]: time="2026-03-04T01:02:09.346101655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:09.520699 systemd[1]: Started cri-containerd-4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099.scope - libcontainer container 4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099. 
Mar 4 01:02:09.603587 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.224 [INFO][4605] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.227 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" iface="eth0" netns="/var/run/netns/cni-3490cb49-edd1-d304-f26b-b34199cfd191" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.228 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" iface="eth0" netns="/var/run/netns/cni-3490cb49-edd1-d304-f26b-b34199cfd191" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.230 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" iface="eth0" netns="/var/run/netns/cni-3490cb49-edd1-d304-f26b-b34199cfd191" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.230 [INFO][4605] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.230 [INFO][4605] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.521 [INFO][4634] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.526 [INFO][4634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.526 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.574 [WARNING][4634] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.575 [INFO][4634] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.592 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:09.628480 containerd[1466]: 2026-03-04 01:02:09.603 [INFO][4605] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:09.629187 containerd[1466]: time="2026-03-04T01:02:09.629115822Z" level=info msg="TearDown network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" successfully" Mar 4 01:02:09.629187 containerd[1466]: time="2026-03-04T01:02:09.629171195Z" level=info msg="StopPodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" returns successfully" Mar 4 01:02:09.640552 systemd[1]: run-netns-cni\x2d3490cb49\x2dedd1\x2dd304\x2df26b\x2db34199cfd191.mount: Deactivated successfully. 
Mar 4 01:02:09.642432 containerd[1466]: time="2026-03-04T01:02:09.641416840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-47wlk,Uid:f2d78d17-4768-48c2-ae26-3e7f45451d5a,Namespace:calico-system,Attempt:1,}" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.426 [INFO][4610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.428 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" iface="eth0" netns="/var/run/netns/cni-22b33603-a2bd-3649-592e-3a10cd535442" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.436 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" iface="eth0" netns="/var/run/netns/cni-22b33603-a2bd-3649-592e-3a10cd535442" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.439 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" iface="eth0" netns="/var/run/netns/cni-22b33603-a2bd-3649-592e-3a10cd535442" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.439 [INFO][4610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.439 [INFO][4610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.608 [INFO][4662] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.609 [INFO][4662] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.616 [INFO][4662] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.644 [WARNING][4662] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.644 [INFO][4662] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.660 [INFO][4662] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:09.682514 containerd[1466]: 2026-03-04 01:02:09.667 [INFO][4610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:09.695481 containerd[1466]: time="2026-03-04T01:02:09.690840242Z" level=info msg="TearDown network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" successfully" Mar 4 01:02:09.695481 containerd[1466]: time="2026-03-04T01:02:09.691062877Z" level=info msg="StopPodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" returns successfully" Mar 4 01:02:09.697067 systemd[1]: run-netns-cni\x2d22b33603\x2da2bd\x2d3649\x2d592e\x2d3a10cd535442.mount: Deactivated successfully. 
Mar 4 01:02:09.706623 containerd[1466]: time="2026-03-04T01:02:09.706165336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-p2psc,Uid:3d0380e4-587c-4361-a5b6-a8c814a6baf0,Namespace:calico-system,Attempt:1,}" Mar 4 01:02:09.766470 containerd[1466]: time="2026-03-04T01:02:09.764160627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqk6s,Uid:6eeda1a6-1f9b-42a9-8645-346f1f25f12e,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099\"" Mar 4 01:02:09.784497 kubelet[2614]: E0304 01:02:09.784365 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:09.809904 containerd[1466]: time="2026-03-04T01:02:09.809716909Z" level=info msg="CreateContainer within sandbox \"4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:02:09.913826 containerd[1466]: time="2026-03-04T01:02:09.910649350Z" level=info msg="CreateContainer within sandbox \"4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf33402476093a944260939b1f08c5ef43e011b4da0a02fb51781327c6d532e8\"" Mar 4 01:02:09.919089 containerd[1466]: time="2026-03-04T01:02:09.919041987Z" level=info msg="StartContainer for \"bf33402476093a944260939b1f08c5ef43e011b4da0a02fb51781327c6d532e8\"" Mar 4 01:02:10.058670 systemd[1]: Started cri-containerd-bf33402476093a944260939b1f08c5ef43e011b4da0a02fb51781327c6d532e8.scope - libcontainer container bf33402476093a944260939b1f08c5ef43e011b4da0a02fb51781327c6d532e8. 
Mar 4 01:02:10.189208 containerd[1466]: time="2026-03-04T01:02:10.188821820Z" level=info msg="StartContainer for \"bf33402476093a944260939b1f08c5ef43e011b4da0a02fb51781327c6d532e8\" returns successfully" Mar 4 01:02:10.328890 systemd-networkd[1404]: cali42a6609be28: Link UP Mar 4 01:02:10.336634 systemd-networkd[1404]: cali42a6609be28: Gained carrier Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:09.989 [INFO][4697] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0 calico-apiserver-5fd449cb54- calico-system f2d78d17-4768-48c2-ae26-3e7f45451d5a 1043 0 2026-03-04 01:01:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fd449cb54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fd449cb54-47wlk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali42a6609be28 [] [] }} ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:09.990 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.112 [INFO][4748] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" 
HandleID="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.138 [INFO][4748] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" HandleID="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5fd449cb54-47wlk", "timestamp":"2026-03-04 01:02:10.11201174 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000726000)} Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.138 [INFO][4748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.139 [INFO][4748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.140 [INFO][4748] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.154 [INFO][4748] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.188 [INFO][4748] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.211 [INFO][4748] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.218 [INFO][4748] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.227 [INFO][4748] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.228 [INFO][4748] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.235 [INFO][4748] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.272 [INFO][4748] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.298 [INFO][4748] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.298 [INFO][4748] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" host="localhost" Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.298 [INFO][4748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:10.379747 containerd[1466]: 2026-03-04 01:02:10.298 [INFO][4748] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" HandleID="k8s-pod-network.2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.380780 containerd[1466]: 2026-03-04 01:02:10.312 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"f2d78d17-4768-48c2-ae26-3e7f45451d5a", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fd449cb54-47wlk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali42a6609be28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:10.380780 containerd[1466]: 2026-03-04 01:02:10.313 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.380780 containerd[1466]: 2026-03-04 01:02:10.314 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42a6609be28 ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.380780 containerd[1466]: 2026-03-04 01:02:10.321 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.380780 containerd[1466]: 2026-03-04 01:02:10.326 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"f2d78d17-4768-48c2-ae26-3e7f45451d5a", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a", Pod:"calico-apiserver-5fd449cb54-47wlk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali42a6609be28", MAC:"8a:bd:bf:f2:c0:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:10.380780 containerd[1466]: 2026-03-04 01:02:10.374 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a" 
Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-47wlk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:10.507878 containerd[1466]: time="2026-03-04T01:02:10.507145519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:10.507878 containerd[1466]: time="2026-03-04T01:02:10.507701243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:10.507878 containerd[1466]: time="2026-03-04T01:02:10.507765793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:10.508548 containerd[1466]: time="2026-03-04T01:02:10.508023864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:10.571716 systemd-networkd[1404]: cali4a32b26b4cb: Link UP Mar 4 01:02:10.590157 systemd-networkd[1404]: cali4a32b26b4cb: Gained carrier Mar 4 01:02:10.594989 systemd[1]: Started cri-containerd-2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a.scope - libcontainer container 2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a. 
Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:09.975 [INFO][4710] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--p2psc-eth0 goldmane-5b85766d88- calico-system 3d0380e4-587c-4361-a5b6-a8c814a6baf0 1045 0 2026-03-04 01:01:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-p2psc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4a32b26b4cb [] [] }} ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:09.985 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.152 [INFO][4746] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" HandleID="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.192 [INFO][4746] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" HandleID="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00013b5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-p2psc", "timestamp":"2026-03-04 01:02:10.152115068 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001662c0)} Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.192 [INFO][4746] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.299 [INFO][4746] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.300 [INFO][4746] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.319 [INFO][4746] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.363 [INFO][4746] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.395 [INFO][4746] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.410 [INFO][4746] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.442 [INFO][4746] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.442 [INFO][4746] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.453 [INFO][4746] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206 Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.480 [INFO][4746] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.520 [INFO][4746] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.520 [INFO][4746] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" host="localhost" Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.522 [INFO][4746] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:02:10.652389 containerd[1466]: 2026-03-04 01:02:10.522 [INFO][4746] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" HandleID="k8s-pod-network.9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.653570 containerd[1466]: 2026-03-04 01:02:10.529 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--p2psc-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3d0380e4-587c-4361-a5b6-a8c814a6baf0", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-p2psc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a32b26b4cb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:10.653570 containerd[1466]: 2026-03-04 01:02:10.533 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.653570 containerd[1466]: 2026-03-04 01:02:10.533 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a32b26b4cb ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.653570 containerd[1466]: 2026-03-04 01:02:10.589 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.653570 containerd[1466]: 2026-03-04 01:02:10.591 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--p2psc-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3d0380e4-587c-4361-a5b6-a8c814a6baf0", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206", Pod:"goldmane-5b85766d88-p2psc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a32b26b4cb", MAC:"22:65:d2:2e:a4:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:10.653570 containerd[1466]: 2026-03-04 01:02:10.629 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206" Namespace="calico-system" Pod="goldmane-5b85766d88-p2psc" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:10.700113 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:10.794934 containerd[1466]: time="2026-03-04T01:02:10.794161914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:10.794934 containerd[1466]: time="2026-03-04T01:02:10.794355894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:10.794934 containerd[1466]: time="2026-03-04T01:02:10.794406008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:10.799502 kubelet[2614]: E0304 01:02:10.797873 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:10.819514 containerd[1466]: time="2026-03-04T01:02:10.803417153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:10.952674 containerd[1466]: time="2026-03-04T01:02:10.951587712Z" level=info msg="StopPodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\"" Mar 4 01:02:10.964756 systemd-networkd[1404]: calid6b05a2b50f: Gained IPv6LL Mar 4 01:02:11.116952 kubelet[2614]: I0304 01:02:11.115582 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jqk6s" podStartSLOduration=46.115549988 podStartE2EDuration="46.115549988s" podCreationTimestamp="2026-03-04 01:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:10.927550078 +0000 UTC m=+54.208838035" watchObservedRunningTime="2026-03-04 01:02:11.115549988 +0000 UTC m=+54.396837925" Mar 4 01:02:11.118722 systemd[1]: Started cri-containerd-9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206.scope - libcontainer container 9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206. 
Mar 4 01:02:11.184848 containerd[1466]: time="2026-03-04T01:02:11.182084433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-47wlk,Uid:f2d78d17-4768-48c2-ae26-3e7f45451d5a,Namespace:calico-system,Attempt:1,} returns sandbox id \"2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a\"" Mar 4 01:02:11.198460 containerd[1466]: time="2026-03-04T01:02:11.198193001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:02:11.253498 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:11.385338 containerd[1466]: time="2026-03-04T01:02:11.383554986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-p2psc,Uid:3d0380e4-587c-4361-a5b6-a8c814a6baf0,Namespace:calico-system,Attempt:1,} returns sandbox id \"9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206\"" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.402 [INFO][4905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.404 [INFO][4905] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" iface="eth0" netns="/var/run/netns/cni-bb35b39e-73d2-fb35-a9a4-b5f446b54575" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.405 [INFO][4905] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" iface="eth0" netns="/var/run/netns/cni-bb35b39e-73d2-fb35-a9a4-b5f446b54575" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.406 [INFO][4905] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" iface="eth0" netns="/var/run/netns/cni-bb35b39e-73d2-fb35-a9a4-b5f446b54575" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.406 [INFO][4905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.406 [INFO][4905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.504 [INFO][4932] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.505 [INFO][4932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.505 [INFO][4932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.544 [WARNING][4932] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.544 [INFO][4932] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.551 [INFO][4932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:11.563031 containerd[1466]: 2026-03-04 01:02:11.557 [INFO][4905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:11.574090 containerd[1466]: time="2026-03-04T01:02:11.565717343Z" level=info msg="TearDown network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" successfully" Mar 4 01:02:11.574090 containerd[1466]: time="2026-03-04T01:02:11.565765994Z" level=info msg="StopPodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" returns successfully" Mar 4 01:02:11.574090 containerd[1466]: time="2026-03-04T01:02:11.570940161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-f4dhr,Uid:38597b56-fbbb-4af6-ab46-5447e9d3191f,Namespace:calico-system,Attempt:1,}" Mar 4 01:02:11.574159 systemd[1]: run-netns-cni\x2dbb35b39e\x2d73d2\x2dfb35\x2da9a4\x2db5f446b54575.mount: Deactivated successfully. 
Mar 4 01:02:11.856159 kubelet[2614]: E0304 01:02:11.855976 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:11.925407 systemd-networkd[1404]: cali42a6609be28: Gained IPv6LL Mar 4 01:02:11.929539 systemd-networkd[1404]: cali4a32b26b4cb: Gained IPv6LL Mar 4 01:02:11.943791 containerd[1466]: time="2026-03-04T01:02:11.940690050Z" level=info msg="StopPodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\"" Mar 4 01:02:11.955580 containerd[1466]: time="2026-03-04T01:02:11.955419613Z" level=info msg="StopPodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\"" Mar 4 01:02:12.623013 systemd-networkd[1404]: cali0e42a86c2c3: Link UP Mar 4 01:02:12.623461 systemd-networkd[1404]: cali0e42a86c2c3: Gained carrier Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:11.869 [INFO][4940] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0 calico-apiserver-5fd449cb54- calico-system 38597b56-fbbb-4af6-ab46-5447e9d3191f 1069 0 2026-03-04 01:01:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fd449cb54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fd449cb54-f4dhr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0e42a86c2c3 [] [] }} ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:11.869 [INFO][4940] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.192 [INFO][4954] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" HandleID="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.261 [INFO][4954] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" HandleID="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001163b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5fd449cb54-f4dhr", "timestamp":"2026-03-04 01:02:12.192037768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005ac000)} Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.261 [INFO][4954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.261 [INFO][4954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.261 [INFO][4954] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.280 [INFO][4954] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.308 [INFO][4954] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.372 [INFO][4954] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.391 [INFO][4954] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.411 [INFO][4954] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.411 [INFO][4954] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.442 [INFO][4954] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906 Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.470 [INFO][4954] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.539 [INFO][4954] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.539 [INFO][4954] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" host="localhost" Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.545 [INFO][4954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:12.758796 containerd[1466]: 2026-03-04 01:02:12.546 [INFO][4954] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" HandleID="k8s-pod-network.338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.772682 containerd[1466]: 2026-03-04 01:02:12.594 [INFO][4940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"38597b56-fbbb-4af6-ab46-5447e9d3191f", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fd449cb54-f4dhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e42a86c2c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:12.772682 containerd[1466]: 2026-03-04 01:02:12.595 [INFO][4940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.772682 containerd[1466]: 2026-03-04 01:02:12.595 [INFO][4940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e42a86c2c3 ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.772682 containerd[1466]: 2026-03-04 01:02:12.610 [INFO][4940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.772682 containerd[1466]: 2026-03-04 01:02:12.625 [INFO][4940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"38597b56-fbbb-4af6-ab46-5447e9d3191f", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906", Pod:"calico-apiserver-5fd449cb54-f4dhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e42a86c2c3", MAC:"86:7c:ee:c0:cb:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:12.772682 containerd[1466]: 2026-03-04 01:02:12.701 [INFO][4940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906" 
Namespace="calico-system" Pod="calico-apiserver-5fd449cb54-f4dhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:12.864646 kubelet[2614]: E0304 01:02:12.864181 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:12.531 [INFO][4980] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:12.533 [INFO][4980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" iface="eth0" netns="/var/run/netns/cni-97523118-5e32-000c-edd4-8c937ee83665" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:12.534 [INFO][4980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" iface="eth0" netns="/var/run/netns/cni-97523118-5e32-000c-edd4-8c937ee83665" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:12.552 [INFO][4980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" iface="eth0" netns="/var/run/netns/cni-97523118-5e32-000c-edd4-8c937ee83665" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:12.552 [INFO][4980] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:12.552 [INFO][4980] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.030 [INFO][5008] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.030 [INFO][5008] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.030 [INFO][5008] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.047 [WARNING][5008] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.047 [INFO][5008] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.062 [INFO][5008] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:13.139691 containerd[1466]: 2026-03-04 01:02:13.093 [INFO][4980] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:13.148403 systemd[1]: run-netns-cni\x2d97523118\x2d5e32\x2d000c\x2dedd4\x2d8c937ee83665.mount: Deactivated successfully. Mar 4 01:02:13.178395 containerd[1466]: time="2026-03-04T01:02:13.165496702Z" level=info msg="TearDown network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" successfully" Mar 4 01:02:13.178395 containerd[1466]: time="2026-03-04T01:02:13.175681190Z" level=info msg="StopPodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" returns successfully" Mar 4 01:02:13.178395 containerd[1466]: time="2026-03-04T01:02:13.178154393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56874799f8-qqwn4,Uid:17b10246-700e-4e39-9a06-ab5fa1ad9082,Namespace:calico-system,Attempt:1,}" Mar 4 01:02:13.193022 containerd[1466]: time="2026-03-04T01:02:13.184073499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:13.214250 containerd[1466]: time="2026-03-04T01:02:13.190378494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:13.214250 containerd[1466]: time="2026-03-04T01:02:13.190434148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:13.214250 containerd[1466]: time="2026-03-04T01:02:13.190611368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:13.354825 systemd[1]: Started cri-containerd-338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906.scope - libcontainer container 338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906. Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:12.562 [INFO][4990] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:12.563 [INFO][4990] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" iface="eth0" netns="/var/run/netns/cni-6382283d-024d-ae72-a07e-89ad84c88df5" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:12.567 [INFO][4990] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" iface="eth0" netns="/var/run/netns/cni-6382283d-024d-ae72-a07e-89ad84c88df5" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:12.573 [INFO][4990] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" iface="eth0" netns="/var/run/netns/cni-6382283d-024d-ae72-a07e-89ad84c88df5" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:12.573 [INFO][4990] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:12.573 [INFO][4990] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.174 [INFO][5014] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.175 [INFO][5014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.175 [INFO][5014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.255 [WARNING][5014] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.260 [INFO][5014] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.310 [INFO][5014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:13.411953 containerd[1466]: 2026-03-04 01:02:13.350 [INFO][4990] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:13.420080 systemd[1]: run-netns-cni\x2d6382283d\x2d024d\x2dae72\x2da07e\x2d89ad84c88df5.mount: Deactivated successfully. 
Mar 4 01:02:13.451411 containerd[1466]: time="2026-03-04T01:02:13.446818155Z" level=info msg="TearDown network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" successfully" Mar 4 01:02:13.451411 containerd[1466]: time="2026-03-04T01:02:13.447042714Z" level=info msg="StopPodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" returns successfully" Mar 4 01:02:13.451658 kubelet[2614]: E0304 01:02:13.447623 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:13.472404 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:13.479866 containerd[1466]: time="2026-03-04T01:02:13.479746544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wjjp,Uid:968ce120-7ba4-48cc-a851-6001d23f80bd,Namespace:kube-system,Attempt:1,}" Mar 4 01:02:13.722575 containerd[1466]: time="2026-03-04T01:02:13.719669212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd449cb54-f4dhr,Uid:38597b56-fbbb-4af6-ab46-5447e9d3191f,Namespace:calico-system,Attempt:1,} returns sandbox id \"338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906\"" Mar 4 01:02:13.841434 systemd-networkd[1404]: cali0e42a86c2c3: Gained IPv6LL Mar 4 01:02:14.244962 systemd-networkd[1404]: calia14faed2b9b: Link UP Mar 4 01:02:14.264625 systemd-networkd[1404]: calia14faed2b9b: Gained carrier Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.776 [INFO][5097] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0 coredns-674b8bbfcf- kube-system 968ce120-7ba4-48cc-a851-6001d23f80bd 1080 0 2026-03-04 01:01:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6wjjp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia14faed2b9b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.777 [INFO][5097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.942 [INFO][5126] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" HandleID="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.960 [INFO][5126] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" HandleID="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eff40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6wjjp", "timestamp":"2026-03-04 01:02:13.942371872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004f6b00)} Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.962 [INFO][5126] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.962 [INFO][5126] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.962 [INFO][5126] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:13.979 [INFO][5126] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.018 [INFO][5126] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.053 [INFO][5126] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.062 [INFO][5126] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.081 [INFO][5126] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.082 [INFO][5126] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.104 [INFO][5126] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8 Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.136 [INFO][5126] ipam/ipam.go 1272: Writing 
block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.209 [INFO][5126] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.209 [INFO][5126] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" host="localhost" Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.209 [INFO][5126] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:14.530496 containerd[1466]: 2026-03-04 01:02:14.209 [INFO][5126] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" HandleID="k8s-pod-network.ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.533490 containerd[1466]: 2026-03-04 01:02:14.235 [INFO][5097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"968ce120-7ba4-48cc-a851-6001d23f80bd", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6wjjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia14faed2b9b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:14.533490 containerd[1466]: 2026-03-04 01:02:14.236 [INFO][5097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.533490 containerd[1466]: 2026-03-04 01:02:14.236 [INFO][5097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia14faed2b9b ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.533490 containerd[1466]: 2026-03-04 01:02:14.240 [INFO][5097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.533490 containerd[1466]: 2026-03-04 01:02:14.274 [INFO][5097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"968ce120-7ba4-48cc-a851-6001d23f80bd", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8", Pod:"coredns-674b8bbfcf-6wjjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calia14faed2b9b", MAC:"fe:05:06:e8:3d:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:14.533490 containerd[1466]: 2026-03-04 01:02:14.453 [INFO][5097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6wjjp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:14.831150 containerd[1466]: time="2026-03-04T01:02:14.827666834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:14.831150 containerd[1466]: time="2026-03-04T01:02:14.827767702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:14.831150 containerd[1466]: time="2026-03-04T01:02:14.827788981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:14.831150 containerd[1466]: time="2026-03-04T01:02:14.827933902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:14.856486 systemd-networkd[1404]: cali831abddf84b: Link UP Mar 4 01:02:14.863785 systemd-networkd[1404]: cali831abddf84b: Gained carrier Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:13.766 [INFO][5085] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0 calico-kube-controllers-56874799f8- calico-system 17b10246-700e-4e39-9a06-ab5fa1ad9082 1079 0 2026-03-04 01:01:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56874799f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56874799f8-qqwn4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali831abddf84b [] [] }} ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:13.767 [INFO][5085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.010 [INFO][5120] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" HandleID="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" 
Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.054 [INFO][5120] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" HandleID="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000121d50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56874799f8-qqwn4", "timestamp":"2026-03-04 01:02:14.010290037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00028e000)} Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.054 [INFO][5120] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.212 [INFO][5120] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.212 [INFO][5120] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.269 [INFO][5120] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.415 [INFO][5120] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.473 [INFO][5120] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.501 [INFO][5120] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.596 [INFO][5120] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.605 [INFO][5120] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.647 [INFO][5120] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.715 [INFO][5120] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.791 [INFO][5120] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.793 [INFO][5120] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" host="localhost" Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.793 [INFO][5120] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:14.928817 containerd[1466]: 2026-03-04 01:02:14.793 [INFO][5120] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" HandleID="k8s-pod-network.85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.930831 containerd[1466]: 2026-03-04 01:02:14.809 [INFO][5085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0", GenerateName:"calico-kube-controllers-56874799f8-", Namespace:"calico-system", SelfLink:"", UID:"17b10246-700e-4e39-9a06-ab5fa1ad9082", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56874799f8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56874799f8-qqwn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali831abddf84b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:14.930831 containerd[1466]: 2026-03-04 01:02:14.810 [INFO][5085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.930831 containerd[1466]: 2026-03-04 01:02:14.810 [INFO][5085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali831abddf84b ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.930831 containerd[1466]: 2026-03-04 01:02:14.863 [INFO][5085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.930831 containerd[1466]: 2026-03-04 
01:02:14.865 [INFO][5085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0", GenerateName:"calico-kube-controllers-56874799f8-", Namespace:"calico-system", SelfLink:"", UID:"17b10246-700e-4e39-9a06-ab5fa1ad9082", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56874799f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab", Pod:"calico-kube-controllers-56874799f8-qqwn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali831abddf84b", MAC:"06:a6:b0:1f:a5:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:14.930831 containerd[1466]: 2026-03-04 
01:02:14.910 [INFO][5085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab" Namespace="calico-system" Pod="calico-kube-controllers-56874799f8-qqwn4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:14.981761 systemd[1]: Started cri-containerd-ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8.scope - libcontainer container ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8. Mar 4 01:02:15.065936 containerd[1466]: time="2026-03-04T01:02:15.065409645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:15.065936 containerd[1466]: time="2026-03-04T01:02:15.065529798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:15.065936 containerd[1466]: time="2026-03-04T01:02:15.065549495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:15.066422 containerd[1466]: time="2026-03-04T01:02:15.065695346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:15.067562 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:15.149083 systemd[1]: run-containerd-runc-k8s.io-85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab-runc.RmKgFB.mount: Deactivated successfully. Mar 4 01:02:15.164690 systemd[1]: Started cri-containerd-85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab.scope - libcontainer container 85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab. 
Mar 4 01:02:15.174787 containerd[1466]: time="2026-03-04T01:02:15.172965924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wjjp,Uid:968ce120-7ba4-48cc-a851-6001d23f80bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8\"" Mar 4 01:02:15.177338 kubelet[2614]: E0304 01:02:15.176105 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.208467 containerd[1466]: time="2026-03-04T01:02:15.208394223Z" level=info msg="CreateContainer within sandbox \"ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:02:15.348042 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:15.375796 systemd-networkd[1404]: calia14faed2b9b: Gained IPv6LL Mar 4 01:02:15.394452 containerd[1466]: time="2026-03-04T01:02:15.389713881Z" level=info msg="CreateContainer within sandbox \"ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8192663dc995c043c533364addc259646cb9f17b7198d298ebfd4fff3a16eb3\"" Mar 4 01:02:15.407546 containerd[1466]: time="2026-03-04T01:02:15.405934481Z" level=info msg="StartContainer for \"a8192663dc995c043c533364addc259646cb9f17b7198d298ebfd4fff3a16eb3\"" Mar 4 01:02:15.521359 containerd[1466]: time="2026-03-04T01:02:15.520055249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56874799f8-qqwn4,Uid:17b10246-700e-4e39-9a06-ab5fa1ad9082,Namespace:calico-system,Attempt:1,} returns sandbox id \"85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab\"" Mar 4 01:02:15.608124 systemd[1]: Started 
cri-containerd-a8192663dc995c043c533364addc259646cb9f17b7198d298ebfd4fff3a16eb3.scope - libcontainer container a8192663dc995c043c533364addc259646cb9f17b7198d298ebfd4fff3a16eb3. Mar 4 01:02:15.802658 containerd[1466]: time="2026-03-04T01:02:15.796962424Z" level=info msg="StartContainer for \"a8192663dc995c043c533364addc259646cb9f17b7198d298ebfd4fff3a16eb3\" returns successfully" Mar 4 01:02:15.959044 kubelet[2614]: E0304 01:02:15.955846 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:16.003720 kubelet[2614]: I0304 01:02:16.003606 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6wjjp" podStartSLOduration=51.00358308 podStartE2EDuration="51.00358308s" podCreationTimestamp="2026-03-04 01:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:15.999385456 +0000 UTC m=+59.280673423" watchObservedRunningTime="2026-03-04 01:02:16.00358308 +0000 UTC m=+59.284871017" Mar 4 01:02:16.785155 systemd-networkd[1404]: cali831abddf84b: Gained IPv6LL Mar 4 01:02:16.880748 containerd[1466]: time="2026-03-04T01:02:16.880694432Z" level=info msg="StopPodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\"" Mar 4 01:02:16.968354 kubelet[2614]: E0304 01:02:16.968149 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.075 [WARNING][5322] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"38597b56-fbbb-4af6-ab46-5447e9d3191f", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906", Pod:"calico-apiserver-5fd449cb54-f4dhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e42a86c2c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.076 [INFO][5322] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.076 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" iface="eth0" netns="" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.076 [INFO][5322] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.076 [INFO][5322] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.224 [INFO][5333] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.224 [INFO][5333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.225 [INFO][5333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.259 [WARNING][5333] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.259 [INFO][5333] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.267 [INFO][5333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:17.283419 containerd[1466]: 2026-03-04 01:02:17.275 [INFO][5322] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:17.283419 containerd[1466]: time="2026-03-04T01:02:17.282940915Z" level=info msg="TearDown network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" successfully" Mar 4 01:02:17.283419 containerd[1466]: time="2026-03-04T01:02:17.282989025Z" level=info msg="StopPodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" returns successfully" Mar 4 01:02:17.331838 containerd[1466]: time="2026-03-04T01:02:17.329588663Z" level=info msg="RemovePodSandbox for \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\"" Mar 4 01:02:17.348843 containerd[1466]: time="2026-03-04T01:02:17.348766193Z" level=info msg="Forcibly stopping sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\"" Mar 4 01:02:17.977353 kubelet[2614]: E0304 01:02:17.973924 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 
01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.715 [WARNING][5351] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"38597b56-fbbb-4af6-ab46-5447e9d3191f", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906", Pod:"calico-apiserver-5fd449cb54-f4dhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e42a86c2c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.715 [INFO][5351] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.715 [INFO][5351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" iface="eth0" netns="" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.715 [INFO][5351] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.715 [INFO][5351] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.865 [INFO][5362] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.865 [INFO][5362] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.866 [INFO][5362] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.924 [WARNING][5362] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.925 [INFO][5362] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" HandleID="k8s-pod-network.3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Workload="localhost-k8s-calico--apiserver--5fd449cb54--f4dhr-eth0" Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:17.990 [INFO][5362] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:18.016477 containerd[1466]: 2026-03-04 01:02:18.006 [INFO][5351] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229" Mar 4 01:02:18.016477 containerd[1466]: time="2026-03-04T01:02:18.015726012Z" level=info msg="TearDown network for sandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" successfully" Mar 4 01:02:18.091961 containerd[1466]: time="2026-03-04T01:02:18.091780896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:18.092167 containerd[1466]: time="2026-03-04T01:02:18.091989093Z" level=info msg="RemovePodSandbox \"3b72b831620acdfa302cdf3fcf5e200801aa8daccdabf0a055a17feaa431f229\" returns successfully" Mar 4 01:02:18.096609 containerd[1466]: time="2026-03-04T01:02:18.094124770Z" level=info msg="StopPodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\"" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.349 [WARNING][5378] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"968ce120-7ba4-48cc-a851-6001d23f80bd", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8", Pod:"coredns-674b8bbfcf-6wjjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia14faed2b9b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.355 [INFO][5378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.355 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" iface="eth0" netns="" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.356 [INFO][5378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.356 [INFO][5378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.564 [INFO][5386] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.571 [INFO][5386] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.572 [INFO][5386] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.637 [WARNING][5386] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.638 [INFO][5386] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.655 [INFO][5386] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:18.678866 containerd[1466]: 2026-03-04 01:02:18.668 [INFO][5378] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:18.678866 containerd[1466]: time="2026-03-04T01:02:18.677809647Z" level=info msg="TearDown network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" successfully" Mar 4 01:02:18.678866 containerd[1466]: time="2026-03-04T01:02:18.677848220Z" level=info msg="StopPodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" returns successfully" Mar 4 01:02:18.681952 containerd[1466]: time="2026-03-04T01:02:18.681845221Z" level=info msg="RemovePodSandbox for \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\"" Mar 4 01:02:18.681952 containerd[1466]: time="2026-03-04T01:02:18.681943794Z" level=info msg="Forcibly stopping sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\"" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:18.911 [WARNING][5404] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"968ce120-7ba4-48cc-a851-6001d23f80bd", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec0b1a85b50c4d3f3c1f5f393f6c269520e15e14e59f656e1c86618b627611e8", Pod:"coredns-674b8bbfcf-6wjjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia14faed2b9b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:18.912 [INFO][5404] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:18.912 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" iface="eth0" netns="" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:18.912 [INFO][5404] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:18.912 [INFO][5404] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.061 [INFO][5413] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.062 [INFO][5413] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.062 [INFO][5413] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.107 [WARNING][5413] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.107 [INFO][5413] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" HandleID="k8s-pod-network.724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Workload="localhost-k8s-coredns--674b8bbfcf--6wjjp-eth0" Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.113 [INFO][5413] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:19.130363 containerd[1466]: 2026-03-04 01:02:19.118 [INFO][5404] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48" Mar 4 01:02:19.130363 containerd[1466]: time="2026-03-04T01:02:19.128825531Z" level=info msg="TearDown network for sandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" successfully" Mar 4 01:02:19.145025 containerd[1466]: time="2026-03-04T01:02:19.144954997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:19.145456 containerd[1466]: time="2026-03-04T01:02:19.145421526Z" level=info msg="RemovePodSandbox \"724f0727c3bb6be54f03f0d35f6b9de804a2a3425ac499698370eb9cd1a38e48\" returns successfully" Mar 4 01:02:19.146555 containerd[1466]: time="2026-03-04T01:02:19.146396090Z" level=info msg="StopPodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\"" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.304 [WARNING][5432] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6eeda1a6-1f9b-42a9-8645-346f1f25f12e", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099", Pod:"coredns-674b8bbfcf-jqk6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6b05a2b50f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.305 [INFO][5432] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.305 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" iface="eth0" netns="" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.306 [INFO][5432] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.306 [INFO][5432] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.422 [INFO][5440] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.422 [INFO][5440] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.422 [INFO][5440] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.473 [WARNING][5440] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.473 [INFO][5440] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.507 [INFO][5440] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:19.538415 containerd[1466]: 2026-03-04 01:02:19.525 [INFO][5432] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.538415 containerd[1466]: time="2026-03-04T01:02:19.538188152Z" level=info msg="TearDown network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" successfully" Mar 4 01:02:19.538415 containerd[1466]: time="2026-03-04T01:02:19.538312663Z" level=info msg="StopPodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" returns successfully" Mar 4 01:02:19.542695 containerd[1466]: time="2026-03-04T01:02:19.541623819Z" level=info msg="RemovePodSandbox for \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\"" Mar 4 01:02:19.542695 containerd[1466]: time="2026-03-04T01:02:19.541669603Z" level=info msg="Forcibly stopping sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\"" Mar 4 01:02:19.885355 containerd[1466]: time="2026-03-04T01:02:19.883827324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:19.885801 containerd[1466]: time="2026-03-04T01:02:19.885750114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 4 01:02:19.901420 containerd[1466]: time="2026-03-04T01:02:19.898606533Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:19.910947 containerd[1466]: time="2026-03-04T01:02:19.908993692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:19.910947 containerd[1466]: time="2026-03-04T01:02:19.910813940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 8.712373359s" Mar 4 01:02:19.910947 containerd[1466]: time="2026-03-04T01:02:19.910858885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:02:19.923387 containerd[1466]: time="2026-03-04T01:02:19.922147822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 01:02:19.936859 containerd[1466]: time="2026-03-04T01:02:19.936626007Z" level=info msg="CreateContainer within sandbox \"2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.742 [WARNING][5457] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6eeda1a6-1f9b-42a9-8645-346f1f25f12e", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a8e4db29ff1f2e684f6c563945617e0bc7394feb275cf5591415a0ea399c099", Pod:"coredns-674b8bbfcf-jqk6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6b05a2b50f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.742 [INFO][5457] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.742 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" iface="eth0" netns="" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.742 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.742 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.891 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.892 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.892 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.922 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.922 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" HandleID="k8s-pod-network.2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Workload="localhost-k8s-coredns--674b8bbfcf--jqk6s-eth0" Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.930 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:19.949943 containerd[1466]: 2026-03-04 01:02:19.940 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917" Mar 4 01:02:19.949943 containerd[1466]: time="2026-03-04T01:02:19.947423387Z" level=info msg="TearDown network for sandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" successfully" Mar 4 01:02:19.969501 containerd[1466]: time="2026-03-04T01:02:19.969203626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:19.970066 containerd[1466]: time="2026-03-04T01:02:19.970035204Z" level=info msg="RemovePodSandbox \"2ffbafdad778f79242c55a6efdc0d1a11b1e0b9d3a68f1220a83a0dc99713917\" returns successfully" Mar 4 01:02:19.974347 containerd[1466]: time="2026-03-04T01:02:19.971013797Z" level=info msg="StopPodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\"" Mar 4 01:02:19.999875 containerd[1466]: time="2026-03-04T01:02:19.999766528Z" level=info msg="CreateContainer within sandbox \"2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9ac2defc119a78fd27935137de9c930fe6de6d7ac100f10f150d931ee85bfea7\"" Mar 4 01:02:20.001176 containerd[1466]: time="2026-03-04T01:02:20.001069429Z" level=info msg="StartContainer for \"9ac2defc119a78fd27935137de9c930fe6de6d7ac100f10f150d931ee85bfea7\"" Mar 4 01:02:20.092041 systemd[1]: run-containerd-runc-k8s.io-9ac2defc119a78fd27935137de9c930fe6de6d7ac100f10f150d931ee85bfea7-runc.C62y6q.mount: Deactivated successfully. Mar 4 01:02:20.108751 systemd[1]: Started cri-containerd-9ac2defc119a78fd27935137de9c930fe6de6d7ac100f10f150d931ee85bfea7.scope - libcontainer container 9ac2defc119a78fd27935137de9c930fe6de6d7ac100f10f150d931ee85bfea7. Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.116 [WARNING][5488] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"f2d78d17-4768-48c2-ae26-3e7f45451d5a", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a", Pod:"calico-apiserver-5fd449cb54-47wlk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali42a6609be28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.116 [INFO][5488] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.116 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" iface="eth0" netns="" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.116 [INFO][5488] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.116 [INFO][5488] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.196 [INFO][5517] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.197 [INFO][5517] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.197 [INFO][5517] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.227 [WARNING][5517] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.228 [INFO][5517] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.237 [INFO][5517] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:20.264371 containerd[1466]: 2026-03-04 01:02:20.249 [INFO][5488] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.264371 containerd[1466]: time="2026-03-04T01:02:20.264156790Z" level=info msg="TearDown network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" successfully" Mar 4 01:02:20.264371 containerd[1466]: time="2026-03-04T01:02:20.264192696Z" level=info msg="StopPodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" returns successfully" Mar 4 01:02:20.272041 containerd[1466]: time="2026-03-04T01:02:20.271976178Z" level=info msg="RemovePodSandbox for \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\"" Mar 4 01:02:20.272041 containerd[1466]: time="2026-03-04T01:02:20.272039667Z" level=info msg="Forcibly stopping sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\"" Mar 4 01:02:20.308035 containerd[1466]: time="2026-03-04T01:02:20.307568526Z" level=info msg="StartContainer for \"9ac2defc119a78fd27935137de9c930fe6de6d7ac100f10f150d931ee85bfea7\" returns successfully" Mar 4 01:02:20.633608 containerd[1466]: 
2026-03-04 01:02:20.453 [WARNING][5547] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0", GenerateName:"calico-apiserver-5fd449cb54-", Namespace:"calico-system", SelfLink:"", UID:"f2d78d17-4768-48c2-ae26-3e7f45451d5a", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd449cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2140e332e36a984be3dede60863f88c10080e5033f1b6afb12d6da38aba49d8a", Pod:"calico-apiserver-5fd449cb54-47wlk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali42a6609be28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.454 [INFO][5547] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.633608 
containerd[1466]: 2026-03-04 01:02:20.454 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" iface="eth0" netns="" Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.454 [INFO][5547] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.454 [INFO][5547] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.570 [INFO][5562] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.572 [INFO][5562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.572 [INFO][5562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.606 [WARNING][5562] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.606 [INFO][5562] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" HandleID="k8s-pod-network.7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Workload="localhost-k8s-calico--apiserver--5fd449cb54--47wlk-eth0" Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.616 [INFO][5562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:20.633608 containerd[1466]: 2026-03-04 01:02:20.623 [INFO][5547] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0" Mar 4 01:02:20.634661 containerd[1466]: time="2026-03-04T01:02:20.633660881Z" level=info msg="TearDown network for sandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" successfully" Mar 4 01:02:20.662725 containerd[1466]: time="2026-03-04T01:02:20.661788115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:20.662725 containerd[1466]: time="2026-03-04T01:02:20.661925531Z" level=info msg="RemovePodSandbox \"7d58f72ad9354dcb75fd3c8dc63f527ea337a7de578b1d075a8d034d2e3a3af0\" returns successfully" Mar 4 01:02:20.663819 containerd[1466]: time="2026-03-04T01:02:20.663016822Z" level=info msg="StopPodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\"" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:20.913 [WARNING][5586] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0", GenerateName:"calico-kube-controllers-56874799f8-", Namespace:"calico-system", SelfLink:"", UID:"17b10246-700e-4e39-9a06-ab5fa1ad9082", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56874799f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab", Pod:"calico-kube-controllers-56874799f8-qqwn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali831abddf84b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:20.915 [INFO][5586] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:20.915 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" iface="eth0" netns="" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:20.915 [INFO][5586] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:20.915 [INFO][5586] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.296 [INFO][5605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.298 [INFO][5605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.298 [INFO][5605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.337 [WARNING][5605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.337 [INFO][5605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.356 [INFO][5605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:21.404676 containerd[1466]: 2026-03-04 01:02:21.379 [INFO][5586] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.404676 containerd[1466]: time="2026-03-04T01:02:21.402805089Z" level=info msg="TearDown network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" successfully" Mar 4 01:02:21.404676 containerd[1466]: time="2026-03-04T01:02:21.402838541Z" level=info msg="StopPodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" returns successfully" Mar 4 01:02:21.406194 containerd[1466]: time="2026-03-04T01:02:21.405757476Z" level=info msg="RemovePodSandbox for \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\"" Mar 4 01:02:21.406194 containerd[1466]: time="2026-03-04T01:02:21.405798262Z" level=info msg="Forcibly stopping sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\"" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.634 [WARNING][5624] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0", GenerateName:"calico-kube-controllers-56874799f8-", Namespace:"calico-system", SelfLink:"", UID:"17b10246-700e-4e39-9a06-ab5fa1ad9082", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56874799f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab", Pod:"calico-kube-controllers-56874799f8-qqwn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali831abddf84b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.634 [INFO][5624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.634 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" iface="eth0" netns="" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.635 [INFO][5624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.635 [INFO][5624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.730 [INFO][5632] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.731 [INFO][5632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.731 [INFO][5632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.759 [WARNING][5632] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.759 [INFO][5632] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" HandleID="k8s-pod-network.97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Workload="localhost-k8s-calico--kube--controllers--56874799f8--qqwn4-eth0" Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.798 [INFO][5632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:21.821480 containerd[1466]: 2026-03-04 01:02:21.809 [INFO][5624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8" Mar 4 01:02:21.821480 containerd[1466]: time="2026-03-04T01:02:21.818447398Z" level=info msg="TearDown network for sandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" successfully" Mar 4 01:02:21.838138 containerd[1466]: time="2026-03-04T01:02:21.836961004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:21.839323 containerd[1466]: time="2026-03-04T01:02:21.839004949Z" level=info msg="RemovePodSandbox \"97fce6911f846717dc1ed55ddddfda9d2a724d2a4a31af6d7331756d56d820e8\" returns successfully" Mar 4 01:02:21.842754 containerd[1466]: time="2026-03-04T01:02:21.842616480Z" level=info msg="StopPodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\"" Mar 4 01:02:22.118661 kubelet[2614]: I0304 01:02:22.118617 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.052 [WARNING][5650] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--p2psc-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3d0380e4-587c-4361-a5b6-a8c814a6baf0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206", Pod:"goldmane-5b85766d88-p2psc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a32b26b4cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.053 [INFO][5650] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.053 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" iface="eth0" netns="" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.053 [INFO][5650] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.053 [INFO][5650] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.173 [INFO][5660] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.174 [INFO][5660] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.174 [INFO][5660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.233 [WARNING][5660] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.233 [INFO][5660] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.253 [INFO][5660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:22.281672 containerd[1466]: 2026-03-04 01:02:22.265 [INFO][5650] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.281672 containerd[1466]: time="2026-03-04T01:02:22.281273262Z" level=info msg="TearDown network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" successfully" Mar 4 01:02:22.281672 containerd[1466]: time="2026-03-04T01:02:22.281358791Z" level=info msg="StopPodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" returns successfully" Mar 4 01:02:22.284439 containerd[1466]: time="2026-03-04T01:02:22.283012976Z" level=info msg="RemovePodSandbox for \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\"" Mar 4 01:02:22.284439 containerd[1466]: time="2026-03-04T01:02:22.283053311Z" level=info msg="Forcibly stopping sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\"" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.514 [WARNING][5681] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--p2psc-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3d0380e4-587c-4361-a5b6-a8c814a6baf0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206", Pod:"goldmane-5b85766d88-p2psc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a32b26b4cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.514 [INFO][5681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.514 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" iface="eth0" netns="" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.514 [INFO][5681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.515 [INFO][5681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.581 [INFO][5690] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.581 [INFO][5690] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.581 [INFO][5690] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.611 [WARNING][5690] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.611 [INFO][5690] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" HandleID="k8s-pod-network.f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Workload="localhost-k8s-goldmane--5b85766d88--p2psc-eth0" Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.615 [INFO][5690] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:22.624717 containerd[1466]: 2026-03-04 01:02:22.620 [INFO][5681] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4" Mar 4 01:02:22.624717 containerd[1466]: time="2026-03-04T01:02:22.624609182Z" level=info msg="TearDown network for sandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" successfully" Mar 4 01:02:22.645369 containerd[1466]: time="2026-03-04T01:02:22.645276254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:22.645949 containerd[1466]: time="2026-03-04T01:02:22.645756819Z" level=info msg="RemovePodSandbox \"f3f3a860ef4b65ccf58460d320e1e002682bf921ad3497ac521d67f0ed167ee4\" returns successfully" Mar 4 01:02:22.646991 containerd[1466]: time="2026-03-04T01:02:22.646874136Z" level=info msg="StopPodSandbox for \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\"" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.775 [WARNING][5707] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" WorkloadEndpoint="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.776 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.776 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" iface="eth0" netns="" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.776 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.776 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.866 [INFO][5716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.867 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.867 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.878 [WARNING][5716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.878 [INFO][5716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.882 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:22.932495 containerd[1466]: 2026-03-04 01:02:22.898 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:22.936366 containerd[1466]: time="2026-03-04T01:02:22.934544751Z" level=info msg="TearDown network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" successfully" Mar 4 01:02:22.936366 containerd[1466]: time="2026-03-04T01:02:22.934601346Z" level=info msg="StopPodSandbox for \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" returns successfully" Mar 4 01:02:22.937677 containerd[1466]: time="2026-03-04T01:02:22.937639333Z" level=info msg="RemovePodSandbox for \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\"" Mar 4 01:02:22.937806 containerd[1466]: time="2026-03-04T01:02:22.937782148Z" level=info msg="Forcibly stopping sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\"" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.093 [WARNING][5734] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" 
WorkloadEndpoint="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.094 [INFO][5734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.094 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" iface="eth0" netns="" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.094 [INFO][5734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.094 [INFO][5734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.175 [INFO][5743] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.175 [INFO][5743] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.175 [INFO][5743] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.194 [WARNING][5743] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.195 [INFO][5743] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" HandleID="k8s-pod-network.bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Workload="localhost-k8s-whisker--5548dddc45--46nf7-eth0" Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.204 [INFO][5743] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:23.216940 containerd[1466]: 2026-03-04 01:02:23.210 [INFO][5734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b" Mar 4 01:02:23.219025 containerd[1466]: time="2026-03-04T01:02:23.217759440Z" level=info msg="TearDown network for sandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" successfully" Mar 4 01:02:23.229586 containerd[1466]: time="2026-03-04T01:02:23.229442101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:23.229586 containerd[1466]: time="2026-03-04T01:02:23.229572084Z" level=info msg="RemovePodSandbox \"bfb40da5d647e2fb8e3dbc03273d1c0fa76fc1500ca2e55f886101df6b75eb3b\" returns successfully" Mar 4 01:02:23.231691 containerd[1466]: time="2026-03-04T01:02:23.231616381Z" level=info msg="StopPodSandbox for \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\"" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.344 [WARNING][5761] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lkt8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5349a34-0a7e-48e8-966b-ab286041115e", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620", Pod:"csi-node-driver-2lkt8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1682198fdd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.345 [INFO][5761] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.345 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" iface="eth0" netns="" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.345 [INFO][5761] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.345 [INFO][5761] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.464 [INFO][5769] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.464 [INFO][5769] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.464 [INFO][5769] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.505 [WARNING][5769] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.505 [INFO][5769] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.516 [INFO][5769] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:23.526554 containerd[1466]: 2026-03-04 01:02:23.522 [INFO][5761] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.527943 containerd[1466]: time="2026-03-04T01:02:23.527637502Z" level=info msg="TearDown network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\" successfully" Mar 4 01:02:23.527943 containerd[1466]: time="2026-03-04T01:02:23.527685502Z" level=info msg="StopPodSandbox for \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\" returns successfully" Mar 4 01:02:23.530009 containerd[1466]: time="2026-03-04T01:02:23.529956879Z" level=info msg="RemovePodSandbox for \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\"" Mar 4 01:02:23.530169 containerd[1466]: time="2026-03-04T01:02:23.530022641Z" level=info msg="Forcibly stopping sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\"" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.646 [WARNING][5786] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lkt8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5349a34-0a7e-48e8-966b-ab286041115e", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e3f9c4737d4f49f9bbb85e061b7dd3a3738400d01ba2c288c1f03fa9adb3620", Pod:"csi-node-driver-2lkt8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1682198fdd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.646 [INFO][5786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.646 [INFO][5786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" iface="eth0" netns="" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.647 [INFO][5786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.647 [INFO][5786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.744 [INFO][5795] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.745 [INFO][5795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.745 [INFO][5795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.762 [WARNING][5795] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.762 [INFO][5795] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" HandleID="k8s-pod-network.4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Workload="localhost-k8s-csi--node--driver--2lkt8-eth0" Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.768 [INFO][5795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:02:23.778899 containerd[1466]: 2026-03-04 01:02:23.774 [INFO][5786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727" Mar 4 01:02:23.778899 containerd[1466]: time="2026-03-04T01:02:23.778785876Z" level=info msg="TearDown network for sandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\" successfully" Mar 4 01:02:23.873807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030913748.mount: Deactivated successfully. Mar 4 01:02:23.922391 containerd[1466]: time="2026-03-04T01:02:23.921732612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:02:23.922391 containerd[1466]: time="2026-03-04T01:02:23.921869607Z" level=info msg="RemovePodSandbox \"4e4e5d67342ad453cb167491e82d7c761d60a71674013fd75cab16364a215727\" returns successfully" Mar 4 01:02:24.293737 kubelet[2614]: I0304 01:02:24.293593 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:02:24.344186 kubelet[2614]: I0304 01:02:24.342458 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fd449cb54-47wlk" podStartSLOduration=36.618369993 podStartE2EDuration="45.342426608s" podCreationTimestamp="2026-03-04 01:01:39 +0000 UTC" firstStartedPulling="2026-03-04 01:02:11.192462947 +0000 UTC m=+54.473750885" lastFinishedPulling="2026-03-04 01:02:19.916519563 +0000 UTC m=+63.197807500" observedRunningTime="2026-03-04 01:02:21.176994334 +0000 UTC m=+64.458282281" watchObservedRunningTime="2026-03-04 01:02:24.342426608 +0000 UTC m=+67.623714545" Mar 4 01:02:25.711039 containerd[1466]: time="2026-03-04T01:02:25.707747779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:25.723835 containerd[1466]: time="2026-03-04T01:02:25.721187857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 4 01:02:25.732583 containerd[1466]: time="2026-03-04T01:02:25.732442545Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:25.749359 containerd[1466]: time="2026-03-04T01:02:25.749169735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:25.766753 containerd[1466]: time="2026-03-04T01:02:25.766637898Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.844305333s" Mar 4 01:02:25.766753 containerd[1466]: time="2026-03-04T01:02:25.766752151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 4 01:02:25.771951 containerd[1466]: time="2026-03-04T01:02:25.771859922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:02:25.797878 containerd[1466]: time="2026-03-04T01:02:25.797778255Z" level=info msg="CreateContainer within sandbox \"9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 01:02:25.879784 containerd[1466]: time="2026-03-04T01:02:25.879577311Z" level=info msg="CreateContainer within sandbox \"9190ac9c88c3111c65b5f2d67d7eb0f6cfb9cf825cd92d9cbfe5d19cd6d7e206\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"76f077a153744df3a7164dfc003813197593d23aeffb3fbffed6ed4126a492b7\"" Mar 4 01:02:25.886140 containerd[1466]: time="2026-03-04T01:02:25.881721658Z" level=info msg="StartContainer for \"76f077a153744df3a7164dfc003813197593d23aeffb3fbffed6ed4126a492b7\"" Mar 4 01:02:26.009492 containerd[1466]: time="2026-03-04T01:02:26.005722633Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:26.009492 containerd[1466]: time="2026-03-04T01:02:26.005799596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 01:02:26.016608 containerd[1466]: 
time="2026-03-04T01:02:26.016549455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 244.630063ms" Mar 4 01:02:26.016833 containerd[1466]: time="2026-03-04T01:02:26.016809610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:02:26.090523 containerd[1466]: time="2026-03-04T01:02:26.089976085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 01:02:26.157930 containerd[1466]: time="2026-03-04T01:02:26.156628596Z" level=info msg="CreateContainer within sandbox \"338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:02:26.238516 containerd[1466]: time="2026-03-04T01:02:26.238416916Z" level=info msg="CreateContainer within sandbox \"338ef7abe3f57fc75ddd27cd9dd2ceb5ab63d4cabfc6238cd6a2fc861a91c906\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d37648fcfba53cbe8fb8265748fa83ef1657f8803bf96fd8d79305b5618cdaa2\"" Mar 4 01:02:26.245275 containerd[1466]: time="2026-03-04T01:02:26.242687364Z" level=info msg="StartContainer for \"d37648fcfba53cbe8fb8265748fa83ef1657f8803bf96fd8d79305b5618cdaa2\"" Mar 4 01:02:26.245993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290755079.mount: Deactivated successfully. Mar 4 01:02:26.272709 systemd[1]: Started cri-containerd-76f077a153744df3a7164dfc003813197593d23aeffb3fbffed6ed4126a492b7.scope - libcontainer container 76f077a153744df3a7164dfc003813197593d23aeffb3fbffed6ed4126a492b7. 
Mar 4 01:02:26.358893 systemd[1]: Started cri-containerd-d37648fcfba53cbe8fb8265748fa83ef1657f8803bf96fd8d79305b5618cdaa2.scope - libcontainer container d37648fcfba53cbe8fb8265748fa83ef1657f8803bf96fd8d79305b5618cdaa2. Mar 4 01:02:26.446477 containerd[1466]: time="2026-03-04T01:02:26.445695796Z" level=info msg="StartContainer for \"76f077a153744df3a7164dfc003813197593d23aeffb3fbffed6ed4126a492b7\" returns successfully" Mar 4 01:02:26.502402 containerd[1466]: time="2026-03-04T01:02:26.502186268Z" level=info msg="StartContainer for \"d37648fcfba53cbe8fb8265748fa83ef1657f8803bf96fd8d79305b5618cdaa2\" returns successfully" Mar 4 01:02:26.936079 kubelet[2614]: E0304 01:02:26.934193 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:27.329175 kubelet[2614]: I0304 01:02:27.328806 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fd449cb54-f4dhr" podStartSLOduration=36.010630778 podStartE2EDuration="48.328775892s" podCreationTimestamp="2026-03-04 01:01:39 +0000 UTC" firstStartedPulling="2026-03-04 01:02:13.744367661 +0000 UTC m=+57.025655588" lastFinishedPulling="2026-03-04 01:02:26.062512765 +0000 UTC m=+69.343800702" observedRunningTime="2026-03-04 01:02:27.258874578 +0000 UTC m=+70.540162525" watchObservedRunningTime="2026-03-04 01:02:27.328775892 +0000 UTC m=+70.610063849" Mar 4 01:02:27.343129 kubelet[2614]: I0304 01:02:27.342788 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-p2psc" podStartSLOduration=33.966506866 podStartE2EDuration="48.342549284s" podCreationTimestamp="2026-03-04 01:01:39 +0000 UTC" firstStartedPulling="2026-03-04 01:02:11.393791848 +0000 UTC m=+54.675079775" lastFinishedPulling="2026-03-04 01:02:25.769834246 +0000 UTC m=+69.051122193" observedRunningTime="2026-03-04 01:02:27.323932129 
+0000 UTC m=+70.605220066" watchObservedRunningTime="2026-03-04 01:02:27.342549284 +0000 UTC m=+70.623837210" Mar 4 01:02:28.235738 kubelet[2614]: I0304 01:02:28.235698 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:02:28.425095 systemd[1]: run-containerd-runc-k8s.io-76f077a153744df3a7164dfc003813197593d23aeffb3fbffed6ed4126a492b7-runc.AVgODl.mount: Deactivated successfully. Mar 4 01:02:34.576119 containerd[1466]: time="2026-03-04T01:02:34.575818484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:34.577705 containerd[1466]: time="2026-03-04T01:02:34.577551565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 4 01:02:34.583581 containerd[1466]: time="2026-03-04T01:02:34.583386261Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:34.589183 containerd[1466]: time="2026-03-04T01:02:34.588872703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:34.611295 containerd[1466]: time="2026-03-04T01:02:34.590460218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 8.50036438s" Mar 4 01:02:34.611295 containerd[1466]: time="2026-03-04T01:02:34.590505903Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 4 01:02:34.741473 containerd[1466]: time="2026-03-04T01:02:34.741146125Z" level=info msg="CreateContainer within sandbox \"85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 01:02:34.852542 containerd[1466]: time="2026-03-04T01:02:34.851914907Z" level=info msg="CreateContainer within sandbox \"85eb5893906387f657c9cc5e264568befa30d4668662b3f64f8891a9eeb798ab\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cd5787dff1e6676af9c0f4b803e2a51a5b675a656841a9816cc01838e1033274\"" Mar 4 01:02:34.857777 containerd[1466]: time="2026-03-04T01:02:34.857623431Z" level=info msg="StartContainer for \"cd5787dff1e6676af9c0f4b803e2a51a5b675a656841a9816cc01838e1033274\"" Mar 4 01:02:34.945881 kubelet[2614]: E0304 01:02:34.935854 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:35.040125 systemd[1]: Started cri-containerd-cd5787dff1e6676af9c0f4b803e2a51a5b675a656841a9816cc01838e1033274.scope - libcontainer container cd5787dff1e6676af9c0f4b803e2a51a5b675a656841a9816cc01838e1033274. 
Mar 4 01:02:35.255979 containerd[1466]: time="2026-03-04T01:02:35.252109560Z" level=info msg="StartContainer for \"cd5787dff1e6676af9c0f4b803e2a51a5b675a656841a9816cc01838e1033274\" returns successfully" Mar 4 01:02:35.474932 kubelet[2614]: I0304 01:02:35.474726 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56874799f8-qqwn4" podStartSLOduration=36.406613333 podStartE2EDuration="55.474702096s" podCreationTimestamp="2026-03-04 01:01:40 +0000 UTC" firstStartedPulling="2026-03-04 01:02:15.52483841 +0000 UTC m=+58.806126337" lastFinishedPulling="2026-03-04 01:02:34.592927173 +0000 UTC m=+77.874215100" observedRunningTime="2026-03-04 01:02:35.473728703 +0000 UTC m=+78.755016650" watchObservedRunningTime="2026-03-04 01:02:35.474702096 +0000 UTC m=+78.755990043" Mar 4 01:02:37.934994 kubelet[2614]: E0304 01:02:37.933706 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:46.558070 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:56482.service - OpenSSH per-connection server daemon (10.0.0.1:56482). Mar 4 01:02:46.867646 sshd[6099]: Accepted publickey for core from 10.0.0.1 port 56482 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:02:46.877827 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:02:46.932560 systemd-logind[1456]: New session 10 of user core. Mar 4 01:02:46.937674 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 01:02:47.907357 sshd[6099]: pam_unix(sshd:session): session closed for user core Mar 4 01:02:47.912939 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:56482.service: Deactivated successfully. Mar 4 01:02:47.916776 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 01:02:47.923056 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. 
Mar 4 01:02:47.925717 systemd-logind[1456]: Removed session 10. Mar 4 01:02:51.818044 kubelet[2614]: I0304 01:02:51.810822 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:02:52.968849 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:53588.service - OpenSSH per-connection server daemon (10.0.0.1:53588). Mar 4 01:02:53.040197 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:02:53.043540 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:02:53.053903 systemd-logind[1456]: New session 11 of user core. Mar 4 01:02:53.065848 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 01:02:53.298683 sshd[6119]: pam_unix(sshd:session): session closed for user core Mar 4 01:02:53.305791 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:53588.service: Deactivated successfully. Mar 4 01:02:53.309713 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:02:53.311390 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Mar 4 01:02:53.313914 systemd-logind[1456]: Removed session 11. Mar 4 01:02:54.934542 kubelet[2614]: E0304 01:02:54.931758 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:58.319592 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:53594.service - OpenSSH per-connection server daemon (10.0.0.1:53594). Mar 4 01:02:58.496947 sshd[6176]: Accepted publickey for core from 10.0.0.1 port 53594 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:02:58.501097 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:02:58.513389 systemd-logind[1456]: New session 12 of user core. Mar 4 01:02:58.523727 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 4 01:02:58.813753 sshd[6176]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:58.820900 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:53594.service: Deactivated successfully.
Mar 4 01:02:58.824059 systemd[1]: session-12.scope: Deactivated successfully.
Mar 4 01:02:58.827919 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit.
Mar 4 01:02:58.830649 systemd-logind[1456]: Removed session 12.
Mar 4 01:03:04.725152 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:43764.service - OpenSSH per-connection server daemon (10.0.0.1:43764).
Mar 4 01:03:04.970191 kubelet[2614]: E0304 01:03:04.969953 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.147s"
Mar 4 01:03:06.185551 kubelet[2614]: E0304 01:03:06.170843 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:06.227020 sshd[6205]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:06.226764 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:06.342388 systemd-logind[1456]: New session 13 of user core.
Mar 4 01:03:06.358365 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 01:03:06.939830 sshd[6205]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:06.971871 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Mar 4 01:03:06.973635 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:43764.service: Deactivated successfully.
Mar 4 01:03:06.984798 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 01:03:07.015445 systemd-logind[1456]: Removed session 13.
Mar 4 01:03:11.961499 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:59016.service - OpenSSH per-connection server daemon (10.0.0.1:59016).
Mar 4 01:03:12.080666 sshd[6265]: Accepted publickey for core from 10.0.0.1 port 59016 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:12.088134 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:12.114506 systemd-logind[1456]: New session 14 of user core.
Mar 4 01:03:12.125993 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 01:03:12.451390 sshd[6265]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:12.458908 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:59016.service: Deactivated successfully.
Mar 4 01:03:12.465182 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 01:03:12.466835 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Mar 4 01:03:12.470076 systemd-logind[1456]: Removed session 14.
Mar 4 01:03:17.519619 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:59030.service - OpenSSH per-connection server daemon (10.0.0.1:59030).
Mar 4 01:03:17.642353 sshd[6284]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:17.645077 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:17.655633 systemd-logind[1456]: New session 15 of user core.
Mar 4 01:03:17.666692 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 01:03:17.910101 sshd[6284]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:17.916640 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:59030.service: Deactivated successfully.
Mar 4 01:03:17.920949 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 01:03:17.927533 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Mar 4 01:03:17.929684 systemd-logind[1456]: Removed session 15.
Mar 4 01:03:22.947943 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:41256.service - OpenSSH per-connection server daemon (10.0.0.1:41256).
Mar 4 01:03:23.057967 sshd[6310]: Accepted publickey for core from 10.0.0.1 port 41256 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:23.062191 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:23.083810 systemd-logind[1456]: New session 16 of user core.
Mar 4 01:03:23.091183 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 01:03:23.458326 sshd[6310]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:23.471065 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:41256.service: Deactivated successfully.
Mar 4 01:03:23.475124 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 01:03:23.485800 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Mar 4 01:03:23.506290 systemd-logind[1456]: Removed session 16.
Mar 4 01:03:28.351534 systemd[1]: run-containerd-runc-k8s.io-823cac8ed40b0007ae5fef2e78c858d68582c3aaa08ad2d08171618e014753ac-runc.j5yFU4.mount: Deactivated successfully.
Mar 4 01:03:28.519413 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:41260.service - OpenSSH per-connection server daemon (10.0.0.1:41260).
Mar 4 01:03:28.621793 sshd[6374]: Accepted publickey for core from 10.0.0.1 port 41260 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:28.627432 sshd[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:28.647730 systemd-logind[1456]: New session 17 of user core.
Mar 4 01:03:28.654870 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 01:03:29.075566 sshd[6374]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:29.093349 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:41260.service: Deactivated successfully.
Mar 4 01:03:29.109171 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 01:03:29.114444 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Mar 4 01:03:29.122904 systemd-logind[1456]: Removed session 17.
Mar 4 01:03:30.930019 kubelet[2614]: E0304 01:03:30.929439 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:34.136558 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:38114.service - OpenSSH per-connection server daemon (10.0.0.1:38114).
Mar 4 01:03:34.205972 sshd[6412]: Accepted publickey for core from 10.0.0.1 port 38114 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:34.209812 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:34.229464 systemd-logind[1456]: New session 18 of user core.
Mar 4 01:03:34.241128 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 01:03:34.649716 sshd[6412]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:34.669960 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:38114.service: Deactivated successfully.
Mar 4 01:03:34.675456 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 01:03:34.690604 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Mar 4 01:03:34.720894 systemd-logind[1456]: Removed session 18.
Mar 4 01:03:39.676320 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:59326.service - OpenSSH per-connection server daemon (10.0.0.1:59326).
Mar 4 01:03:39.832889 sshd[6487]: Accepted publickey for core from 10.0.0.1 port 59326 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:39.836802 sshd[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:39.851392 systemd-logind[1456]: New session 19 of user core.
Mar 4 01:03:39.859493 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 01:03:40.169110 sshd[6487]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:40.191011 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:59326.service: Deactivated successfully.
Mar 4 01:03:40.196171 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 01:03:40.202394 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Mar 4 01:03:40.206663 systemd-logind[1456]: Removed session 19.
Mar 4 01:03:43.930045 kubelet[2614]: E0304 01:03:43.927635 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:44.927096 kubelet[2614]: E0304 01:03:44.926977 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:45.200820 systemd[1]: Started sshd@19-10.0.0.41:22-10.0.0.1:59328.service - OpenSSH per-connection server daemon (10.0.0.1:59328).
Mar 4 01:03:45.249448 sshd[6523]: Accepted publickey for core from 10.0.0.1 port 59328 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:45.253094 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:45.276794 systemd-logind[1456]: New session 20 of user core.
Mar 4 01:03:45.295642 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 01:03:45.520663 sshd[6523]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:45.538824 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:59328.service: Deactivated successfully.
Mar 4 01:03:45.542182 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 01:03:45.547473 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Mar 4 01:03:45.553849 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:59338.service - OpenSSH per-connection server daemon (10.0.0.1:59338).
Mar 4 01:03:45.556187 systemd-logind[1456]: Removed session 20.
Mar 4 01:03:45.655499 sshd[6538]: Accepted publickey for core from 10.0.0.1 port 59338 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:45.658831 sshd[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:45.674337 systemd-logind[1456]: New session 21 of user core.
Mar 4 01:03:45.698877 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 01:03:46.110342 sshd[6538]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:46.127115 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:59338.service: Deactivated successfully.
Mar 4 01:03:46.133529 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 01:03:46.137055 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Mar 4 01:03:46.166079 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:59354.service - OpenSSH per-connection server daemon (10.0.0.1:59354).
Mar 4 01:03:46.171838 systemd-logind[1456]: Removed session 21.
Mar 4 01:03:46.370123 sshd[6550]: Accepted publickey for core from 10.0.0.1 port 59354 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:46.373315 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:46.413540 systemd-logind[1456]: New session 22 of user core.
Mar 4 01:03:46.427105 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 01:03:46.720471 sshd[6550]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:46.732910 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:59354.service: Deactivated successfully.
Mar 4 01:03:46.741924 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 01:03:46.745064 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit.
Mar 4 01:03:46.747528 systemd-logind[1456]: Removed session 22.
Mar 4 01:03:46.936367 kubelet[2614]: E0304 01:03:46.929171 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:51.805005 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:39218.service - OpenSSH per-connection server daemon (10.0.0.1:39218).
Mar 4 01:03:52.053629 sshd[6565]: Accepted publickey for core from 10.0.0.1 port 39218 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:52.061103 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:52.072173 systemd-logind[1456]: New session 23 of user core.
Mar 4 01:03:52.085428 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 01:03:52.751477 sshd[6565]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:52.767602 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:39218.service: Deactivated successfully.
Mar 4 01:03:52.783193 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 01:03:52.794156 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit.
Mar 4 01:03:52.797104 systemd-logind[1456]: Removed session 23.
Mar 4 01:03:57.778079 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:39226.service - OpenSSH per-connection server daemon (10.0.0.1:39226).
Mar 4 01:03:57.826116 sshd[6579]: Accepted publickey for core from 10.0.0.1 port 39226 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:57.829102 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:57.856854 systemd-logind[1456]: New session 24 of user core.
Mar 4 01:03:57.869116 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 01:03:58.113866 sshd[6579]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:58.132093 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:39226.service: Deactivated successfully.
Mar 4 01:03:58.135408 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 01:03:58.139520 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit.
Mar 4 01:03:58.149596 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:39242.service - OpenSSH per-connection server daemon (10.0.0.1:39242).
Mar 4 01:03:58.155203 systemd-logind[1456]: Removed session 24.
Mar 4 01:03:58.208757 sshd[6596]: Accepted publickey for core from 10.0.0.1 port 39242 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:58.212207 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:58.229124 systemd-logind[1456]: New session 25 of user core.
Mar 4 01:03:58.240838 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 01:03:58.930584 kubelet[2614]: E0304 01:03:58.930038 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:59.130647 sshd[6596]: pam_unix(sshd:session): session closed for user core
Mar 4 01:03:59.149455 systemd[1]: Started sshd@25-10.0.0.41:22-10.0.0.1:37100.service - OpenSSH per-connection server daemon (10.0.0.1:37100).
Mar 4 01:03:59.152946 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:39242.service: Deactivated successfully.
Mar 4 01:03:59.162079 systemd[1]: session-25.scope: Deactivated successfully.
Mar 4 01:03:59.164337 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit.
Mar 4 01:03:59.167725 systemd-logind[1456]: Removed session 25.
Mar 4 01:03:59.298443 sshd[6651]: Accepted publickey for core from 10.0.0.1 port 37100 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:03:59.302053 sshd[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:03:59.319698 systemd-logind[1456]: New session 26 of user core.
Mar 4 01:03:59.340612 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 4 01:04:00.842017 sshd[6651]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:00.863328 systemd[1]: sshd@25-10.0.0.41:22-10.0.0.1:37100.service: Deactivated successfully.
Mar 4 01:04:00.867049 systemd[1]: session-26.scope: Deactivated successfully.
Mar 4 01:04:00.873369 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit.
Mar 4 01:04:00.891944 systemd[1]: Started sshd@26-10.0.0.41:22-10.0.0.1:37112.service - OpenSSH per-connection server daemon (10.0.0.1:37112).
Mar 4 01:04:00.895383 systemd-logind[1456]: Removed session 26.
Mar 4 01:04:00.952585 sshd[6683]: Accepted publickey for core from 10.0.0.1 port 37112 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:00.953488 sshd[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:00.963946 systemd-logind[1456]: New session 27 of user core.
Mar 4 01:04:00.972560 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 4 01:04:01.447322 sshd[6683]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:01.457798 systemd[1]: sshd@26-10.0.0.41:22-10.0.0.1:37112.service: Deactivated successfully.
Mar 4 01:04:01.461734 systemd[1]: session-27.scope: Deactivated successfully.
Mar 4 01:04:01.467728 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit.
Mar 4 01:04:01.477333 systemd[1]: Started sshd@27-10.0.0.41:22-10.0.0.1:37118.service - OpenSSH per-connection server daemon (10.0.0.1:37118).
Mar 4 01:04:01.478747 systemd-logind[1456]: Removed session 27.
Mar 4 01:04:01.528735 sshd[6696]: Accepted publickey for core from 10.0.0.1 port 37118 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:01.532470 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:01.551053 systemd-logind[1456]: New session 28 of user core.
Mar 4 01:04:01.567761 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 4 01:04:01.809430 sshd[6696]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:01.817027 systemd-logind[1456]: Session 28 logged out. Waiting for processes to exit.
Mar 4 01:04:01.817912 systemd[1]: sshd@27-10.0.0.41:22-10.0.0.1:37118.service: Deactivated successfully.
Mar 4 01:04:01.821653 systemd[1]: session-28.scope: Deactivated successfully.
Mar 4 01:04:01.825970 systemd-logind[1456]: Removed session 28.
Mar 4 01:04:06.848699 systemd[1]: Started sshd@28-10.0.0.41:22-10.0.0.1:37124.service - OpenSSH per-connection server daemon (10.0.0.1:37124).
Mar 4 01:04:07.051895 sshd[6752]: Accepted publickey for core from 10.0.0.1 port 37124 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:07.056548 sshd[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:07.097445 systemd-logind[1456]: New session 29 of user core.
Mar 4 01:04:07.116496 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 4 01:04:07.789308 sshd[6752]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:07.801725 systemd[1]: sshd@28-10.0.0.41:22-10.0.0.1:37124.service: Deactivated successfully.
Mar 4 01:04:07.813823 systemd[1]: session-29.scope: Deactivated successfully.
Mar 4 01:04:07.822769 systemd-logind[1456]: Session 29 logged out. Waiting for processes to exit.
Mar 4 01:04:07.836621 systemd-logind[1456]: Removed session 29.
Mar 4 01:04:12.835100 systemd[1]: Started sshd@29-10.0.0.41:22-10.0.0.1:42828.service - OpenSSH per-connection server daemon (10.0.0.1:42828).
Mar 4 01:04:12.928442 sshd[6770]: Accepted publickey for core from 10.0.0.1 port 42828 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:12.937085 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:12.963316 systemd-logind[1456]: New session 30 of user core.
Mar 4 01:04:12.982084 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 4 01:04:13.392578 sshd[6770]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:13.417457 systemd[1]: sshd@29-10.0.0.41:22-10.0.0.1:42828.service: Deactivated successfully.
Mar 4 01:04:13.425833 systemd[1]: session-30.scope: Deactivated successfully.
Mar 4 01:04:13.430044 systemd-logind[1456]: Session 30 logged out. Waiting for processes to exit.
Mar 4 01:04:13.432275 systemd-logind[1456]: Removed session 30.
Mar 4 01:04:14.933872 kubelet[2614]: E0304 01:04:14.931572 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:14.933872 kubelet[2614]: E0304 01:04:14.932647 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:18.413635 systemd[1]: Started sshd@30-10.0.0.41:22-10.0.0.1:42834.service - OpenSSH per-connection server daemon (10.0.0.1:42834).
Mar 4 01:04:18.549046 sshd[6788]: Accepted publickey for core from 10.0.0.1 port 42834 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:18.552615 sshd[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:18.562423 systemd-logind[1456]: New session 31 of user core.
Mar 4 01:04:18.573604 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 4 01:04:18.806203 sshd[6788]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:18.811556 systemd[1]: sshd@30-10.0.0.41:22-10.0.0.1:42834.service: Deactivated successfully.
Mar 4 01:04:18.814387 systemd[1]: session-31.scope: Deactivated successfully.
Mar 4 01:04:18.815772 systemd-logind[1456]: Session 31 logged out. Waiting for processes to exit.
Mar 4 01:04:18.817722 systemd-logind[1456]: Removed session 31.