Mar 2 12:54:15.996141 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026 Mar 2 12:54:15.996198 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 12:54:15.996220 kernel: BIOS-provided physical RAM map: Mar 2 12:54:15.996232 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 2 12:54:15.996352 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 2 12:54:15.996365 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 2 12:54:15.996378 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 2 12:54:15.996388 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 2 12:54:15.996439 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 2 12:54:15.996456 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 2 12:54:15.996468 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 2 12:54:15.996477 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 2 12:54:15.996522 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 2 12:54:15.996533 kernel: NX (Execute Disable) protection: active Mar 2 12:54:15.996543 kernel: APIC: Static calls initialized Mar 2 12:54:15.996596 kernel: SMBIOS 2.8 present. 
Mar 2 12:54:15.996609 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 2 12:54:15.996621 kernel: Hypervisor detected: KVM Mar 2 12:54:15.996630 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 2 12:54:15.996640 kernel: kvm-clock: using sched offset of 15788184903 cycles Mar 2 12:54:15.996652 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 2 12:54:15.996664 kernel: tsc: Detected 2445.426 MHz processor Mar 2 12:54:15.996673 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 2 12:54:15.996685 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 2 12:54:15.996704 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 2 12:54:15.996716 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 2 12:54:15.996726 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 2 12:54:15.996737 kernel: Using GB pages for direct mapping Mar 2 12:54:15.996748 kernel: ACPI: Early table checksum verification disabled Mar 2 12:54:15.996761 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 2 12:54:15.996771 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996781 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996793 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996808 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 2 12:54:15.996820 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996830 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996842 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996853 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:54:15.996864 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 2 12:54:15.996875 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 2 12:54:15.996895 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 2 12:54:15.996910 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 2 12:54:15.996921 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 2 12:54:15.996934 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 2 12:54:15.996944 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 2 12:54:15.996956 kernel: No NUMA configuration found Mar 2 12:54:15.996968 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 2 12:54:15.996982 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 2 12:54:15.996989 kernel: Zone ranges: Mar 2 12:54:15.996996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 2 12:54:15.997003 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 2 12:54:15.997009 kernel: Normal empty Mar 2 12:54:15.997016 kernel: Movable zone start for each node Mar 2 12:54:15.997022 kernel: Early memory node ranges Mar 2 12:54:15.997029 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 2 12:54:15.997036 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 2 12:54:15.997042 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 2 12:54:15.997060 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Mar 2 12:54:15.997115 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 2 12:54:15.997130 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 2 12:54:15.997138 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 2 12:54:15.997145 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 2 12:54:15.997152 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 2 12:54:15.997158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 2 12:54:15.997165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 2 12:54:15.997172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 2 12:54:15.997184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 2 12:54:15.997190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 2 12:54:15.997197 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 2 12:54:15.997204 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 2 12:54:15.997211 kernel: TSC deadline timer available Mar 2 12:54:15.997217 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 2 12:54:15.997224 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 2 12:54:15.997231 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 2 12:54:15.997264 kernel: kvm-guest: setup PV sched yield Mar 2 12:54:15.997335 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 2 12:54:15.997344 kernel: Booting paravirtualized kernel on KVM Mar 2 12:54:15.997351 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 2 12:54:15.997358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 2 12:54:15.997365 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 2 12:54:15.997372 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 2 12:54:15.997378 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 2 12:54:15.997385 kernel: kvm-guest: PV spinlocks enabled Mar 2 12:54:15.997391 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 2 12:54:15.997433 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 12:54:15.997440 kernel: random: crng init done Mar 2 12:54:15.997447 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 2 12:54:15.997454 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 2 12:54:15.997461 kernel: Fallback order for Node 0: 0 Mar 2 12:54:15.997468 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 2 12:54:15.997474 kernel: Policy zone: DMA32 Mar 2 12:54:15.997481 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 2 12:54:15.997492 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 2 12:54:15.997499 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 2 12:54:15.997505 kernel: ftrace: allocating 37996 entries in 149 pages Mar 2 12:54:15.997512 kernel: ftrace: allocated 149 pages with 4 groups Mar 2 12:54:15.997519 kernel: Dynamic Preempt: voluntary Mar 2 12:54:15.997526 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 2 12:54:15.997534 kernel: rcu: RCU event tracing is enabled. Mar 2 12:54:15.997541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 2 12:54:15.997548 kernel: Trampoline variant of Tasks RCU enabled. Mar 2 12:54:15.997558 kernel: Rude variant of Tasks RCU enabled. Mar 2 12:54:15.997565 kernel: Tracing variant of Tasks RCU enabled. Mar 2 12:54:15.997575 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 2 12:54:15.997587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 2 12:54:15.997633 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 2 12:54:15.997644 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 2 12:54:15.997656 kernel: Console: colour VGA+ 80x25 Mar 2 12:54:15.997668 kernel: printk: console [ttyS0] enabled Mar 2 12:54:15.997678 kernel: ACPI: Core revision 20230628 Mar 2 12:54:15.997691 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 2 12:54:15.997697 kernel: APIC: Switch to symmetric I/O mode setup Mar 2 12:54:15.997704 kernel: x2apic enabled Mar 2 12:54:15.997711 kernel: APIC: Switched APIC routing to: physical x2apic Mar 2 12:54:15.997717 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 2 12:54:15.997724 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 2 12:54:15.997731 kernel: kvm-guest: setup PV IPIs Mar 2 12:54:15.997737 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 2 12:54:15.997757 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 2 12:54:15.997764 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 2 12:54:15.997771 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 2 12:54:15.997778 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 2 12:54:15.997788 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 2 12:54:15.997795 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 2 12:54:15.997802 kernel: Spectre V2 : Mitigation: Retpolines Mar 2 12:54:15.997809 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 2 12:54:15.997819 kernel: Speculative Store Bypass: Vulnerable Mar 2 12:54:15.997826 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 2 12:54:15.997863 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 2 12:54:15.997872 kernel: active return thunk: srso_alias_return_thunk Mar 2 12:54:15.997879 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 2 12:54:15.997886 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 2 12:54:15.997893 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 2 12:54:15.997900 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 2 12:54:15.997907 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 2 12:54:15.997942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 2 12:54:15.997951 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 2 12:54:15.997965 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 2 12:54:15.997977 kernel: Freeing SMP alternatives memory: 32K Mar 2 12:54:15.997986 kernel: pid_max: default: 32768 minimum: 301 Mar 2 12:54:15.997999 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 2 12:54:15.998012 kernel: landlock: Up and running. Mar 2 12:54:15.998022 kernel: SELinux: Initializing. Mar 2 12:54:15.998035 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 12:54:15.998053 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 12:54:15.998065 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 2 12:54:15.998078 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 12:54:15.998089 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 12:54:15.998101 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 12:54:15.998114 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 2 12:54:15.998125 kernel: signal: max sigframe size: 1776 Mar 2 12:54:15.998174 kernel: rcu: Hierarchical SRCU implementation. Mar 2 12:54:15.998189 kernel: rcu: Max phase no-delay instances is 400. Mar 2 12:54:15.998209 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 2 12:54:15.998216 kernel: smp: Bringing up secondary CPUs ... Mar 2 12:54:15.998223 kernel: smpboot: x86: Booting SMP configuration: Mar 2 12:54:15.998230 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 2 12:54:15.998237 kernel: smp: Brought up 1 node, 4 CPUs Mar 2 12:54:15.998244 kernel: smpboot: Max logical packages: 1 Mar 2 12:54:15.998251 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 2 12:54:15.998258 kernel: devtmpfs: initialized Mar 2 12:54:15.998265 kernel: x86/mm: Memory block size: 128MB Mar 2 12:54:15.998330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 2 12:54:15.998340 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 2 12:54:15.998347 kernel: pinctrl core: initialized pinctrl subsystem Mar 2 12:54:15.998354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 2 12:54:15.998361 kernel: audit: initializing netlink subsys (disabled) Mar 2 12:54:15.998368 kernel: audit: type=2000 audit(1772456049.750:1): state=initialized audit_enabled=0 res=1 Mar 2 12:54:15.998375 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 2 12:54:15.998382 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 2 12:54:15.998389 kernel: cpuidle: using governor menu Mar 2 12:54:15.998435 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 2 12:54:15.998442 kernel: dca service started, version 1.12.1 Mar 2 12:54:15.998449 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 2 12:54:15.998456 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 2 12:54:15.998463 kernel: PCI: Using configuration type 1 for base access Mar 2 12:54:15.998470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 2 12:54:15.998477 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 2 12:54:15.998484 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 2 12:54:15.998491 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 2 12:54:15.998502 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 2 12:54:15.998509 kernel: ACPI: Added _OSI(Module Device) Mar 2 12:54:15.998516 kernel: ACPI: Added _OSI(Processor Device) Mar 2 12:54:15.998523 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 2 12:54:15.998530 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 2 12:54:15.998536 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 2 12:54:15.998543 kernel: ACPI: Interpreter enabled Mar 2 12:54:15.998550 kernel: ACPI: PM: (supports S0 S3 S5) Mar 2 12:54:15.998596 kernel: ACPI: Using IOAPIC for interrupt routing Mar 2 12:54:15.998617 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 2 12:54:15.998628 kernel: PCI: Using E820 reservations for host bridge windows Mar 2 12:54:15.998638 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 2 12:54:15.998649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 2 12:54:15.999582 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 2 12:54:16.000028 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 2 12:54:16.000521 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 2 12:54:16.000541 kernel: PCI host bridge to bus 0000:00 Mar 2 12:54:16.000817 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 2 12:54:16.000999 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 2 
12:54:16.001352 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 2 12:54:16.001596 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 2 12:54:16.001884 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 2 12:54:16.002125 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 2 12:54:16.002380 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 2 12:54:16.002751 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 2 12:54:16.002974 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 2 12:54:16.003177 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 2 12:54:16.003526 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 2 12:54:16.003750 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 2 12:54:16.003984 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 2 12:54:16.004443 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 2 12:54:16.004712 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 2 12:54:16.004948 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 2 12:54:16.005193 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 2 12:54:16.005671 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 2 12:54:16.005886 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 2 12:54:16.006107 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 2 12:54:16.006543 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 2 12:54:16.006880 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 2 12:54:16.007105 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 2 12:54:16.007428 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 2 12:54:16.007592 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 2 12:54:16.007739 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 2 12:54:16.008098 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 2 12:54:16.008508 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 2 12:54:16.008851 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 2 12:54:16.009073 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 2 12:54:16.009229 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 2 12:54:16.009685 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 2 12:54:16.009912 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 2 12:54:16.009939 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 2 12:54:16.009953 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 2 12:54:16.009965 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 2 12:54:16.009977 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 2 12:54:16.009990 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 2 12:54:16.010001 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 2 12:54:16.010013 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 2 12:54:16.010027 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 2 12:54:16.010045 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 2 12:54:16.010057 kernel: ACPI: PCI: Interrupt link GSIB configured for 
IRQ 17 Mar 2 12:54:16.010069 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 2 12:54:16.010080 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 2 12:54:16.010092 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 2 12:54:16.010104 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 2 12:54:16.010117 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 2 12:54:16.010129 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 2 12:54:16.010141 kernel: iommu: Default domain type: Translated Mar 2 12:54:16.010157 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 2 12:54:16.010170 kernel: PCI: Using ACPI for IRQ routing Mar 2 12:54:16.010181 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 2 12:54:16.010195 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 2 12:54:16.010205 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 2 12:54:16.010570 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 2 12:54:16.010770 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 2 12:54:16.010922 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 2 12:54:16.010938 kernel: vgaarb: loaded Mar 2 12:54:16.010945 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 2 12:54:16.010952 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 2 12:54:16.010959 kernel: clocksource: Switched to clocksource kvm-clock Mar 2 12:54:16.010967 kernel: VFS: Disk quotas dquot_6.6.0 Mar 2 12:54:16.010974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 2 12:54:16.010981 kernel: pnp: PnP ACPI init Mar 2 12:54:16.011352 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 2 12:54:16.011367 kernel: pnp: PnP ACPI: found 6 devices Mar 2 12:54:16.011381 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 2 12:54:16.011388 kernel: NET: Registered PF_INET protocol family Mar 2 12:54:16.011424 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 2 12:54:16.011432 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 2 12:54:16.011439 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 2 12:54:16.011446 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 2 12:54:16.011453 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 2 12:54:16.011460 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 2 12:54:16.011467 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 12:54:16.011479 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 12:54:16.011486 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 2 12:54:16.011493 kernel: NET: Registered PF_XDP protocol family Mar 2 12:54:16.011643 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 2 12:54:16.011779 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 2 12:54:16.011911 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 2 12:54:16.012050 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 2 12:54:16.012181 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 2 12:54:16.012381 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 2 
12:54:16.012392 kernel: PCI: CLS 0 bytes, default 64 Mar 2 12:54:16.012433 kernel: Initialise system trusted keyrings Mar 2 12:54:16.012440 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 2 12:54:16.012447 kernel: Key type asymmetric registered Mar 2 12:54:16.012454 kernel: Asymmetric key parser 'x509' registered Mar 2 12:54:16.012461 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 2 12:54:16.012468 kernel: io scheduler mq-deadline registered Mar 2 12:54:16.012475 kernel: io scheduler kyber registered Mar 2 12:54:16.012487 kernel: io scheduler bfq registered Mar 2 12:54:16.012494 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 2 12:54:16.012503 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 2 12:54:16.012510 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 2 12:54:16.012517 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 2 12:54:16.012524 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 12:54:16.012531 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 2 12:54:16.012538 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 2 12:54:16.012545 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 2 12:54:16.012555 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 2 12:54:16.012870 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 2 12:54:16.012883 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 2 12:54:16.013085 kernel: rtc_cmos 00:04: registered as rtc0 Mar 2 12:54:16.013226 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T12:54:14 UTC (1772456054) Mar 2 12:54:16.013492 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 2 12:54:16.013504 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 2 12:54:16.013512 kernel: NET: Registered PF_INET6 protocol family Mar 2 12:54:16.013525 kernel: Segment Routing with IPv6 Mar 2 12:54:16.013532 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 12:54:16.013539 kernel: NET: Registered PF_PACKET protocol family Mar 2 12:54:16.013546 kernel: Key type dns_resolver registered Mar 2 12:54:16.013553 kernel: IPI shorthand broadcast: enabled Mar 2 12:54:16.013560 kernel: sched_clock: Marking stable (3819025415, 803646105)->(5347746916, -725075396) Mar 2 12:54:16.013567 kernel: registered taskstats version 1 Mar 2 12:54:16.013575 kernel: Loading compiled-in X.509 certificates Mar 2 12:54:16.013588 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45' Mar 2 12:54:16.013608 kernel: Key type .fscrypt registered Mar 2 12:54:16.013618 kernel: Key type fscrypt-provisioning registered Mar 2 12:54:16.013629 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 2 12:54:16.013643 kernel: ima: Allocated hash algorithm: sha1 Mar 2 12:54:16.013654 kernel: ima: No architecture policies found Mar 2 12:54:16.013667 kernel: clk: Disabling unused clocks Mar 2 12:54:16.013678 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 2 12:54:16.013691 kernel: Write protecting the kernel read-only data: 36864k Mar 2 12:54:16.013704 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 2 12:54:16.013721 kernel: Run /init as init process Mar 2 12:54:16.013735 kernel: with arguments: Mar 2 12:54:16.013747 kernel: /init Mar 2 12:54:16.013759 kernel: with environment: Mar 2 12:54:16.013768 kernel: HOME=/ Mar 2 12:54:16.013775 kernel: TERM=linux Mar 2 12:54:16.013784 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 12:54:16.013794 systemd[1]: Detected virtualization kvm. Mar 2 12:54:16.013807 systemd[1]: Detected architecture x86-64. Mar 2 12:54:16.013814 systemd[1]: Running in initrd. Mar 2 12:54:16.013822 systemd[1]: No hostname configured, using default hostname. Mar 2 12:54:16.013829 systemd[1]: Hostname set to . Mar 2 12:54:16.013837 systemd[1]: Initializing machine ID from VM UUID. Mar 2 12:54:16.013844 systemd[1]: Queued start job for default target initrd.target. Mar 2 12:54:16.013852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:54:16.013859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:54:16.013871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 12:54:16.013878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 12:54:16.013886 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 12:54:16.013893 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 12:54:16.013902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 12:54:16.013910 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 12:54:16.013921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:54:16.013929 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:54:16.013936 systemd[1]: Reached target paths.target - Path Units. Mar 2 12:54:16.013943 systemd[1]: Reached target slices.target - Slice Units. Mar 2 12:54:16.013951 systemd[1]: Reached target swap.target - Swaps. Mar 2 12:54:16.013975 systemd[1]: Reached target timers.target - Timer Units. Mar 2 12:54:16.013995 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 12:54:16.014011 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 12:54:16.014026 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 12:54:16.014040 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 2 12:54:16.014053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 2 12:54:16.014073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 12:54:16.014082 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 12:54:16.014090 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 12:54:16.014097 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 12:54:16.014108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 12:54:16.014116 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 12:54:16.014124 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 12:54:16.014137 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 12:54:16.014151 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 12:54:16.014163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:54:16.014176 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 12:54:16.014225 systemd-journald[195]: Collecting audit messages is disabled. Mar 2 12:54:16.014252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:54:16.014260 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 12:54:16.014271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 12:54:16.014351 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 12:54:16.014368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 12:54:16.014382 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:54:16.014392 systemd-journald[195]: Journal started Mar 2 12:54:16.014465 systemd-journald[195]: Runtime Journal (/run/log/journal/6691ec029c354a68ba09c5d2b8f988a6) is 6.0M, max 48.4M, 42.3M free. Mar 2 12:54:16.002751 systemd-modules-load[196]: Inserted module 'overlay' Mar 2 12:54:16.041895 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 12:54:16.057858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 12:54:16.266365 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 12:54:16.266455 kernel: Bridge firewalling registered Mar 2 12:54:16.069461 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 2 12:54:16.276675 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 12:54:16.278384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:16.305618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 12:54:16.309665 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 12:54:16.327928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:54:16.362124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:54:16.374614 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 12:54:16.388240 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:54:16.393664 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 2 12:54:16.423627 dracut-cmdline[232]: dracut-dracut-053 Mar 2 12:54:16.431992 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 12:54:16.433369 systemd-resolved[226]: Positive Trust Anchors: Mar 2 12:54:16.433386 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 12:54:16.433477 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 12:54:16.437990 systemd-resolved[226]: Defaulting to hostname 'linux'. Mar 2 12:54:16.441355 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 12:54:16.459671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:54:16.810880 kernel: SCSI subsystem initialized Mar 2 12:54:16.861518 kernel: Loading iSCSI transport class v2.0-870. Mar 2 12:54:16.900648 kernel: iscsi: registered transport (tcp) Mar 2 12:54:16.984928 kernel: hrtimer: interrupt took 22599453 ns Mar 2 12:54:17.067722 kernel: iscsi: registered transport (qla4xxx) Mar 2 12:54:17.068006 kernel: QLogic iSCSI HBA Driver Mar 2 12:54:17.177740 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 12:54:17.201999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 12:54:17.286110 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 12:54:17.286206 kernel: device-mapper: uevent: version 1.0.3 Mar 2 12:54:17.292673 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 2 12:54:17.392495 kernel: raid6: avx2x4 gen() 21312 MB/s Mar 2 12:54:17.408439 kernel: raid6: avx2x2 gen() 27382 MB/s Mar 2 12:54:17.437624 kernel: raid6: avx2x1 gen() 10075 MB/s Mar 2 12:54:17.438009 kernel: raid6: using algorithm avx2x2 gen() 27382 MB/s Mar 2 12:54:17.461174 kernel: raid6: .... xor() 12708 MB/s, rmw enabled Mar 2 12:54:17.461639 kernel: raid6: using avx2x2 recovery algorithm Mar 2 12:54:17.499598 kernel: xor: automatically using best checksumming function avx Mar 2 12:54:17.767732 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 12:54:17.792710 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 12:54:17.810878 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:54:17.846557 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 2 12:54:17.855874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:54:17.881559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 2 12:54:17.910091 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Mar 2 12:54:17.972960 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 12:54:17.989615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 12:54:18.112067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:54:18.129579 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 12:54:18.156491 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 12:54:18.160894 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 12:54:18.171163 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:54:18.174985 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 12:54:18.276187 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 12:54:18.300386 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 12:54:18.306782 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 12:54:18.309962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 12:54:18.350506 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 12:54:18.310193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:54:18.375920 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 12:54:18.375998 kernel: GPT:9289727 != 19775487 Mar 2 12:54:18.376041 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 12:54:18.376057 kernel: GPT:9289727 != 19775487 Mar 2 12:54:18.376072 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 12:54:18.376087 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:54:18.331228 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 12:54:18.335226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:54:18.335490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:18.336034 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:54:18.372740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:54:18.378881 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 12:54:18.416983 kernel: libata version 3.00 loaded. Mar 2 12:54:18.437709 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 2 12:54:18.445265 kernel: AES CTR mode by8 optimization enabled Mar 2 12:54:18.445468 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 12:54:18.445812 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 12:54:18.463396 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 2 12:54:18.463887 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 12:54:18.476587 kernel: scsi host0: ahci Mar 2 12:54:19.201681 kernel: scsi host1: ahci Mar 2 12:54:19.210475 kernel: scsi host2: ahci Mar 2 12:54:19.228919 kernel: scsi host3: ahci Mar 2 12:54:19.244709 kernel: scsi host4: ahci Mar 2 12:54:19.246394 kernel: scsi host5: ahci Mar 2 12:54:19.258461 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 2 12:54:19.258552 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 2 12:54:19.258574 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 2 12:54:19.258592 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 2 12:54:19.258610 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 2 12:54:19.258628 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 2 12:54:19.258644 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Mar 2 12:54:19.271220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 12:54:19.711803 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (476) Mar 2 12:54:19.711851 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 12:54:19.711870 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 12:54:19.711888 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 12:54:19.711905 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 12:54:19.711922 kernel: ata3.00: applying bridge limits Mar 2 12:54:19.711940 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 12:54:19.711956 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 12:54:19.711973 kernel: ata3.00: configured for UDMA/100 Mar 2 12:54:19.711994 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 12:54:19.712011 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 12:54:19.736688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:19.765028 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 12:54:19.786362 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 12:54:19.798045 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 12:54:19.839405 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 12:54:19.839845 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 12:54:19.804126 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 12:54:19.853395 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 12:54:19.863615 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 12:54:19.874793 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 12:54:19.886149 disk-uuid[568]: Primary Header is updated. 
Mar 2 12:54:19.886149 disk-uuid[568]: Secondary Entries is updated. Mar 2 12:54:19.886149 disk-uuid[568]: Secondary Header is updated. Mar 2 12:54:19.904870 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:54:19.914973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:54:19.940794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:54:19.955373 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:54:21.005566 disk-uuid[570]: Warning: The kernel is still using the old partition table. Mar 2 12:54:21.005566 disk-uuid[570]: The new table will be used at the next reboot or after you Mar 2 12:54:21.005566 disk-uuid[570]: run partprobe(8) or kpartx(8) Mar 2 12:54:21.005566 disk-uuid[570]: The operation has completed successfully. Mar 2 12:54:21.782973 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 12:54:21.783258 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 12:54:21.838203 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 12:54:21.860627 sh[593]: Success Mar 2 12:54:21.921434 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 2 12:54:22.101415 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 12:54:22.120259 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 12:54:22.146787 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 12:54:22.190528 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 Mar 2 12:54:22.190618 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:54:22.190638 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 12:54:22.195840 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 12:54:22.204345 kernel: BTRFS info (device dm-0): using free space tree Mar 2 12:54:22.265782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 12:54:22.273526 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 12:54:22.296855 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 12:54:22.309987 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 12:54:22.351707 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 12:54:22.351840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:54:22.351864 kernel: BTRFS info (device vda6): using free space tree Mar 2 12:54:22.375389 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 12:54:22.396123 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 12:54:22.408035 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 12:54:22.460777 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 12:54:22.489881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 12:54:22.809395 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 12:54:22.848718 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 2 12:54:22.849403 ignition[695]: Ignition 2.19.0 Mar 2 12:54:22.849415 ignition[695]: Stage: fetch-offline Mar 2 12:54:22.849521 ignition[695]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:22.849535 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:22.849632 ignition[695]: parsed url from cmdline: "" Mar 2 12:54:22.849637 ignition[695]: no config URL provided Mar 2 12:54:22.849643 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 12:54:22.849654 ignition[695]: no config at "/usr/lib/ignition/user.ign" Mar 2 12:54:22.849688 ignition[695]: op(1): [started] loading QEMU firmware config module Mar 2 12:54:22.849693 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 12:54:22.896496 ignition[695]: op(1): [finished] loading QEMU firmware config module Mar 2 12:54:22.902368 systemd-networkd[779]: lo: Link UP Mar 2 12:54:22.902418 systemd-networkd[779]: lo: Gained carrier Mar 2 12:54:22.909380 systemd-networkd[779]: Enumeration completed Mar 2 12:54:22.910580 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 12:54:22.912781 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:54:22.912788 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 12:54:22.917237 systemd-networkd[779]: eth0: Link UP Mar 2 12:54:22.917243 systemd-networkd[779]: eth0: Gained carrier Mar 2 12:54:22.917251 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:54:22.919056 systemd[1]: Reached target network.target - Network. Mar 2 12:54:23.477727 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 12:54:23.655418 ignition[695]: parsing config with SHA512: 6f680708fa5490fdba9dddec23ab326da96e48fb5a0291f399d391294653e4cdf7b12ebf011fcde87cb531498e43e99770b9583d94477f1858eb0e808c77785c Mar 2 12:54:23.686371 unknown[695]: fetched base config from "system" Mar 2 12:54:23.686421 unknown[695]: fetched user config from "qemu" Mar 2 12:54:23.695379 ignition[695]: fetch-offline: fetch-offline passed Mar 2 12:54:23.698901 ignition[695]: Ignition finished successfully Mar 2 12:54:23.704875 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 12:54:23.715021 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 12:54:23.760559 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 12:54:24.209382 ignition[785]: Ignition 2.19.0 Mar 2 12:54:24.209439 ignition[785]: Stage: kargs Mar 2 12:54:24.212815 ignition[785]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:24.215419 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:24.216839 ignition[785]: kargs: kargs passed Mar 2 12:54:24.232962 ignition[785]: Ignition finished successfully Mar 2 12:54:24.243162 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 12:54:24.259677 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 12:54:24.361415 ignition[794]: Ignition 2.19.0 Mar 2 12:54:24.361508 ignition[794]: Stage: disks Mar 2 12:54:24.361988 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:24.362009 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:24.383078 ignition[794]: disks: disks passed Mar 2 12:54:24.383160 ignition[794]: Ignition finished successfully Mar 2 12:54:24.388894 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 12:54:24.408856 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 12:54:24.412266 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 12:54:24.453051 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 12:54:24.460969 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 12:54:24.481360 systemd[1]: Reached target basic.target - Basic System. Mar 2 12:54:24.493715 systemd-networkd[779]: eth0: Gained IPv6LL Mar 2 12:54:24.509770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 12:54:24.589561 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 2 12:54:24.600745 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 12:54:24.629073 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 12:54:25.087789 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none. Mar 2 12:54:25.094074 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 12:54:25.102828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 12:54:25.130989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 12:54:25.138772 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 12:54:25.147229 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 2 12:54:25.184428 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Mar 2 12:54:25.184539 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 12:54:25.184566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:54:25.184587 kernel: BTRFS info (device vda6): using free space tree Mar 2 12:54:25.147363 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 12:54:25.206224 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 12:54:25.147573 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 12:54:25.188190 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 12:54:25.213129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 12:54:25.242015 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 2 12:54:25.436121 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 12:54:25.444003 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 2 12:54:25.455934 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 12:54:25.468609 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 12:54:25.969743 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 12:54:25.991779 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 12:54:26.003820 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 12:54:26.035022 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 2 12:54:26.053114 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 12:54:26.115797 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 12:54:26.391054 ignition[926]: INFO : Ignition 2.19.0 Mar 2 12:54:26.391054 ignition[926]: INFO : Stage: mount Mar 2 12:54:26.400240 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:26.400240 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:26.400240 ignition[926]: INFO : mount: mount passed Mar 2 12:54:26.400240 ignition[926]: INFO : Ignition finished successfully Mar 2 12:54:26.407072 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 12:54:26.442695 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 12:54:26.465532 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 12:54:26.560145 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 2 12:54:26.560212 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 12:54:26.568598 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:54:26.568660 kernel: BTRFS info (device vda6): using free space tree Mar 2 12:54:26.596082 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 12:54:26.602814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 12:54:26.811797 ignition[957]: INFO : Ignition 2.19.0 Mar 2 12:54:26.811797 ignition[957]: INFO : Stage: files Mar 2 12:54:26.811797 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:26.811797 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:26.843140 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 2 12:54:26.849727 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 12:54:26.849727 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 12:54:26.875270 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 12:54:26.889041 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 12:54:26.904763 unknown[957]: wrote ssh authorized keys file for user: core Mar 2 12:54:26.917867 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 12:54:26.926171 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:54:26.933685 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 12:54:27.029075 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 12:54:27.431865 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:54:27.431865 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 2 12:54:27.475854 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 2 12:54:27.938026 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 2 12:54:31.697179 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 2 12:54:31.697179 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 2 12:54:31.719613 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 12:54:31.905714 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 12:54:31.940012 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 12:54:31.960403 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 12:54:31.960403 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 2 12:54:31.960403 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 12:54:31.960403 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 12:54:31.960403 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 12:54:31.960403 ignition[957]: INFO : files: files passed Mar 2 12:54:31.960403 ignition[957]: INFO : Ignition finished successfully Mar 2 12:54:31.950111 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 12:54:32.000975 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 12:54:32.014652 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 12:54:32.043272 systemd[1]: ignition-quench.service: Deactivated successfully. 
Mar 2 12:54:32.043698 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 12:54:32.063420 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 12:54:32.075803 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:54:32.089185 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:54:32.098881 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:54:32.106240 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 12:54:32.140501 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 12:54:32.186654 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 12:54:32.301566 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 12:54:32.304910 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 12:54:32.312845 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 12:54:32.333050 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 12:54:32.336715 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 12:54:32.372222 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 12:54:32.593576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 12:54:32.624909 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 12:54:32.650832 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:54:32.654792 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:54:32.660781 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 12:54:32.684869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 12:54:32.685637 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 12:54:32.698451 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 12:54:32.708014 systemd[1]: Stopped target basic.target - Basic System. Mar 2 12:54:32.783231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 12:54:32.806369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 12:54:32.852497 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 12:54:32.853251 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 12:54:32.939502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 12:54:32.972045 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 12:54:32.987215 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 12:54:32.995007 systemd[1]: Stopped target swap.target - Swaps. Mar 2 12:54:33.009441 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 12:54:33.011485 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 12:54:33.034875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:54:33.058187 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 2 12:54:33.062470 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 12:54:33.075396 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:54:33.092378 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 12:54:33.095799 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 12:54:33.101707 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 12:54:33.101901 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 12:54:33.105217 systemd[1]: Stopped target paths.target - Path Units. Mar 2 12:54:33.112583 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 12:54:33.114506 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:54:33.172780 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 12:54:33.180711 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 12:54:33.189262 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 12:54:33.189668 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 12:54:33.202706 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 12:54:33.206349 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 12:54:33.216986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 12:54:33.243092 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 12:54:33.256257 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 12:54:33.257470 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 12:54:33.297165 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 12:54:33.314727 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 12:54:33.326094 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 12:54:33.326872 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:54:33.349485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 12:54:33.351072 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 12:54:33.369917 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 12:54:33.370098 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 12:54:33.390657 ignition[1012]: INFO : Ignition 2.19.0 Mar 2 12:54:33.390657 ignition[1012]: INFO : Stage: umount Mar 2 12:54:33.390657 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:33.390657 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:33.405412 ignition[1012]: INFO : umount: umount passed Mar 2 12:54:33.405412 ignition[1012]: INFO : Ignition finished successfully Mar 2 12:54:33.402423 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 12:54:33.402769 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 12:54:33.416010 systemd[1]: Stopped target network.target - Network. Mar 2 12:54:33.433170 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 12:54:33.433693 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 12:54:33.443016 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 12:54:33.443126 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Mar 2 12:54:33.449373 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 12:54:33.449464 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 12:54:33.461400 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 12:54:33.462163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 12:54:33.464659 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 12:54:33.475725 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 12:54:33.493916 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 12:54:33.497988 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 12:54:33.510957 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 12:54:33.511078 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:54:33.527782 systemd-networkd[779]: eth0: DHCPv6 lease lost Mar 2 12:54:33.552849 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 12:54:33.553449 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 12:54:33.564170 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 12:54:33.564348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:54:33.606934 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 12:54:33.627107 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 12:54:33.631663 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 12:54:33.685606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 12:54:33.685817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:54:33.703495 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 12:54:33.703662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 12:54:33.752496 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:54:33.766204 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 12:54:33.769740 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 12:54:33.769928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 12:54:33.830987 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 12:54:33.831137 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 12:54:33.840376 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 12:54:33.840689 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:54:33.848241 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 12:54:33.848448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 12:54:33.880636 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 12:54:33.880726 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 12:54:33.902345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 12:54:33.902460 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 12:54:33.932352 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 12:54:33.932460 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 12:54:33.946658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 2 12:54:33.946772 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:54:34.010779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 12:54:34.011761 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 12:54:34.011854 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:54:34.038633 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 2 12:54:34.038739 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 12:54:34.109026 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 12:54:34.109178 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:54:34.124482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:54:34.124646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:34.211070 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 12:54:34.211511 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 12:54:34.246201 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 12:54:34.247489 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 12:54:34.269692 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 12:54:34.299674 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 12:54:34.326002 systemd[1]: Switching root. Mar 2 12:54:34.374614 systemd-journald[195]: Journal stopped Mar 2 12:54:38.286668 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 2 12:54:38.286788 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 12:54:38.286833 kernel: SELinux: policy capability open_perms=1 Mar 2 12:54:38.286901 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 12:54:38.286934 kernel: SELinux: policy capability always_check_network=0 Mar 2 12:54:38.286955 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 12:54:38.286982 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 12:54:38.287005 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 12:54:38.287024 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 12:54:38.287041 kernel: audit: type=1403 audit(1772456074.719:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 12:54:38.287069 systemd[1]: Successfully loaded SELinux policy in 111.867ms. Mar 2 12:54:38.287101 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.089ms. Mar 2 12:54:38.287125 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 12:54:38.287148 systemd[1]: Detected virtualization kvm. Mar 2 12:54:38.287170 systemd[1]: Detected architecture x86-64. Mar 2 12:54:38.287430 systemd[1]: Detected first boot. Mar 2 12:54:38.287457 systemd[1]: Initializing machine ID from VM UUID. Mar 2 12:54:38.287493 zram_generator::config[1056]: No configuration found. Mar 2 12:54:38.287518 systemd[1]: Populated /etc with preset unit settings. 
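The journal goes quiet between the "Switching root" entry at 12:54:34.374614 and the first post-pivot message at 12:54:38.286668, when journald comes back in the real root. A minimal sketch of computing that gap from the journal's own timestamp format (assuming both entries fall on the same day, since the prefix carries no year):

from datetime import datetime

# Timestamps copied from the two journal entries above; the "Mar 2 ..." prefix
# has no year, so strptime defaults to 1900, which is fine for a same-day delta.
fmt = "%b %d %H:%M:%S.%f"
stopped   = datetime.strptime("Mar 2 12:54:34.374614", fmt)
restarted = datetime.strptime("Mar 2 12:54:38.286668", fmt)

print(f"journald gap across switch-root: {(restarted - stopped).total_seconds():.3f} s")
# roughly 3.912 s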
Mar 2 12:54:38.287547 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 12:54:38.287615 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 2 12:54:38.287641 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 12:54:38.287665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 12:54:38.287689 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 12:54:38.287711 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 12:54:38.287733 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 12:54:38.287755 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 12:54:38.287778 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 12:54:38.287807 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 12:54:38.287829 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 12:54:38.287851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:54:38.288110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:54:38.288134 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 2 12:54:38.288153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 12:54:38.288172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 12:54:38.288192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 12:54:38.288266 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 12:54:38.288351 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:54:38.288372 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 2 12:54:38.288391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 12:54:38.288409 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 12:54:38.288429 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 12:54:38.288448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:54:38.288467 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 12:54:38.288494 systemd[1]: Reached target slices.target - Slice Units. Mar 2 12:54:38.288517 systemd[1]: Reached target swap.target - Swaps. Mar 2 12:54:38.288538 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 12:54:38.288607 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 12:54:38.288630 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:54:38.288649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 12:54:38.288666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 12:54:38.288684 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 12:54:38.288710 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 12:54:38.288737 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Mar 2 12:54:38.288755 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 12:54:38.288772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:38.288789 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 12:54:38.289648 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 12:54:38.289670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 12:54:38.289692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 12:54:38.289712 systemd[1]: Reached target machines.target - Containers. Mar 2 12:54:38.289728 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 12:54:38.289848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:54:38.290017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 12:54:38.290037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 12:54:38.290054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:54:38.290070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:54:38.290090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:54:38.290107 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 2 12:54:38.290124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:54:38.290146 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 12:54:38.290164 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 12:54:38.290180 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 12:54:38.290203 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 12:54:38.290220 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 12:54:38.290237 kernel: fuse: init (API version 7.39) Mar 2 12:54:38.290253 kernel: loop: module loaded Mar 2 12:54:38.290273 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 12:54:38.290355 kernel: ACPI: bus type drm_connector registered Mar 2 12:54:38.290380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 12:54:38.290398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 12:54:38.290415 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 12:54:38.290431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 12:54:38.290496 systemd-journald[1140]: Collecting audit messages is disabled. Mar 2 12:54:38.290602 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 12:54:38.290631 systemd-journald[1140]: Journal started Mar 2 12:54:38.291068 systemd-journald[1140]: Runtime Journal (/run/log/journal/6691ec029c354a68ba09c5d2b8f988a6) is 6.0M, max 48.4M, 42.3M free. Mar 2 12:54:36.901504 systemd[1]: Queued start job for default target multi-user.target. Mar 2 12:54:36.968701 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Mar 2 12:54:36.970974 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 2 12:54:36.971676 systemd[1]: systemd-journald.service: Consumed 2.846s CPU time. Mar 2 12:54:38.296725 systemd[1]: Stopped verity-setup.service. Mar 2 12:54:38.311416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:38.320822 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 12:54:38.333947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 12:54:38.340260 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 12:54:38.347010 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 12:54:38.354878 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 2 12:54:38.363536 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 12:54:38.371129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 12:54:38.417262 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 12:54:38.427985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:54:38.437074 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 12:54:38.437967 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 12:54:38.448803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:54:38.449148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:54:38.455391 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 12:54:38.455742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 12:54:38.465860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:54:38.466124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:54:38.471177 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 12:54:38.471898 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 12:54:38.476935 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:54:38.477271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:54:38.482972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 12:54:38.488490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 12:54:38.495204 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 12:54:38.538745 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 12:54:38.559800 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 12:54:38.571693 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 12:54:38.579118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 12:54:38.579882 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 12:54:38.631401 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 2 12:54:38.676693 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 12:54:38.701870 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 2 12:54:38.706948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:54:38.712170 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 12:54:38.757007 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 12:54:38.773630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 12:54:38.842158 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 12:54:38.855914 systemd-journald[1140]: Time spent on flushing to /var/log/journal/6691ec029c354a68ba09c5d2b8f988a6 is 47.541ms for 943 entries. Mar 2 12:54:38.855914 systemd-journald[1140]: System Journal (/var/log/journal/6691ec029c354a68ba09c5d2b8f988a6) is 8.0M, max 195.6M, 187.6M free. Mar 2 12:54:39.166857 systemd-journald[1140]: Received client request to flush runtime journal. Mar 2 12:54:39.166990 kernel: loop0: detected capacity change from 0 to 142488 Mar 2 12:54:39.167234 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 12:54:38.853995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 12:54:38.875832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 12:54:38.906535 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 12:54:38.941171 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 12:54:38.959704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:54:39.032009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 12:54:39.186455 kernel: loop1: detected capacity change from 0 to 219192 Mar 2 12:54:39.042830 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 12:54:39.050243 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 12:54:39.057474 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 12:54:39.090784 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 12:54:39.109934 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 2 12:54:39.140941 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 2 12:54:39.174101 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 12:54:39.276348 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 12:54:39.336686 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 2 12:54:39.360959 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:54:39.380681 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 2 12:54:39.403158 kernel: loop2: detected capacity change from 0 to 140768 Mar 2 12:54:39.518115 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Mar 2 12:54:39.533754 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. 
Mar 2 12:54:39.613975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 12:54:39.654384 kernel: loop3: detected capacity change from 0 to 142488 Mar 2 12:54:39.663713 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 12:54:39.740854 kernel: loop4: detected capacity change from 0 to 219192 Mar 2 12:54:39.787040 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 12:54:39.810693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 12:54:39.861367 kernel: loop5: detected capacity change from 0 to 140768 Mar 2 12:54:39.943083 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 12:54:39.944238 (sd-merge)[1191]: Merged extensions into '/usr'. Mar 2 12:54:39.953829 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 12:54:39.953976 systemd[1]: Reloading... Mar 2 12:54:40.006871 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Mar 2 12:54:40.007717 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Mar 2 12:54:40.303555 zram_generator::config[1219]: No configuration found. Mar 2 12:54:40.740196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:54:40.986750 systemd[1]: Reloading finished in 1031 ms. Mar 2 12:54:41.100870 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 12:54:41.163261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:54:41.216770 systemd[1]: Starting ensure-sysext.service... Mar 2 12:54:41.241643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 12:54:41.252016 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 2 12:54:41.252036 systemd[1]: Reloading... Mar 2 12:54:41.274000 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 12:54:41.792808 zram_generator::config[1288]: No configuration found. Mar 2 12:54:41.806242 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 12:54:41.807773 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 12:54:41.809788 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 12:54:41.811538 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 2 12:54:41.811797 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 2 12:54:41.858565 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 12:54:41.859046 systemd-tmpfiles[1261]: Skipping /boot Mar 2 12:54:42.249251 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 12:54:42.250000 systemd-tmpfiles[1261]: Skipping /boot Mar 2 12:54:42.456152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
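The (sd-merge) entries show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is why ldconfig and the unit reload follow. As a hedged sketch, not part of the boot flow itself, the snippet below is one way to confirm the merge from the booted host, listing the extension images the Ignition stage dropped under /etc/extensions and calling the systemd-sysext CLI:

import subprocess
from pathlib import Path

# Extension images installed by the Ignition files stage earlier in this log;
# each entry under /etc/extensions is expected to be a symlink into /opt/extensions.
for image in sorted(Path("/etc/extensions").glob("*.raw")):
    print("extension image:", image, "->", image.resolve())

# Report which hierarchies systemd-sysext currently has merged.
subprocess.run(["systemd-sysext", "status"], check=True)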
Mar 2 12:54:42.525520 systemd[1]: Reloading finished in 1272 ms. Mar 2 12:54:42.649095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 12:54:42.656827 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 12:54:42.685945 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:54:42.748349 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:54:42.769103 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 12:54:42.799989 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 12:54:42.854247 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 12:54:42.904991 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:54:43.031180 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 12:54:43.040429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:43.040766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:54:43.048371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:54:43.061667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:54:43.077951 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:54:43.087502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:54:43.097728 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Mar 2 12:54:43.113170 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 12:54:43.136017 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:43.146756 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 2 12:54:43.155054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:54:43.155562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:54:43.165847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:54:43.166145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:54:43.169114 augenrules[1353]: No rules Mar 2 12:54:43.177868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 12:54:43.185059 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:54:43.185395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:54:43.201656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:54:43.219027 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 12:54:43.308750 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 12:54:43.357131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:43.357565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 2 12:54:43.403560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:54:43.413909 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:54:43.424545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:54:43.459217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:54:43.487001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:54:43.505644 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 12:54:43.514352 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 12:54:43.518127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:43.519388 systemd[1]: Finished ensure-sysext.service. Mar 2 12:54:43.530261 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 12:54:43.540119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:54:43.540562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:54:43.547106 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 12:54:43.547859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 12:54:43.553700 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:54:43.554019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:54:43.560431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:54:43.561018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:54:43.584581 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 2 12:54:43.584805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 12:54:43.584913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 12:54:43.598677 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 2 12:54:43.605136 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 2 12:54:43.606569 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 12:54:43.931561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1399) Mar 2 12:54:43.957829 systemd-resolved[1333]: Positive Trust Anchors: Mar 2 12:54:43.958369 systemd-resolved[1333]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 12:54:43.958476 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 12:54:43.967579 systemd-resolved[1333]: Defaulting to hostname 'linux'. Mar 2 12:54:43.970142 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 12:54:43.974583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:54:44.030990 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 12:54:44.041064 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 12:54:44.045681 systemd-networkd[1386]: lo: Link UP Mar 2 12:54:44.045715 systemd-networkd[1386]: lo: Gained carrier Mar 2 12:54:44.051193 systemd-networkd[1386]: Enumeration completed Mar 2 12:54:44.051446 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 12:54:44.058905 systemd[1]: Reached target network.target - Network. Mar 2 12:54:44.067731 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 2 12:54:44.063457 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:54:44.063472 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 12:54:44.066099 systemd-networkd[1386]: eth0: Link UP Mar 2 12:54:44.066107 systemd-networkd[1386]: eth0: Gained carrier Mar 2 12:54:44.066135 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:54:44.097447 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 12:54:44.117219 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 12:54:44.131191 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 2 12:54:44.131835 kernel: ACPI: button: Power Button [PWRF] Mar 2 12:54:44.131854 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 12:54:44.098793 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 2 12:54:44.100821 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Mar 2 12:54:44.132927 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 12:54:44.133011 systemd-timesyncd[1398]: Initial clock synchronization to Mon 2026-03-02 12:54:44.131768 UTC. Mar 2 12:54:44.140058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 12:54:44.158735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 12:54:44.184407 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 12:54:44.197566 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
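systemd-networkd configured eth0 from zz-default.network with the DHCPv4 lease 10.0.0.12/16 and gateway 10.0.0.1. A small sketch, using only values taken from the log, of what that prefix implies for the interface's network and whether the gateway is on-link:

import ipaddress

# Values copied from the DHCPv4 lease recorded above.
iface = ipaddress.ip_interface("10.0.0.12/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network:", iface.network)                     # 10.0.0.0/16
print("usable hosts:", iface.network.num_addresses - 2)
print("gateway on-link:", gateway in iface.network)  # True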
Mar 2 12:54:44.242482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:54:44.268433 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 12:54:44.817658 kernel: kvm_amd: TSC scaling supported Mar 2 12:54:44.832887 kernel: kvm_amd: Nested Virtualization enabled Mar 2 12:54:44.832946 kernel: kvm_amd: Nested Paging enabled Mar 2 12:54:44.832972 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 12:54:44.833021 kernel: kvm_amd: PMU virtualization is disabled Mar 2 12:54:44.948389 kernel: EDAC MC: Ver: 3.0.0 Mar 2 12:54:44.988154 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 12:54:45.121766 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:45.159656 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 12:54:45.180086 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 12:54:45.576872 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 2 12:54:45.582702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:54:45.587958 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 12:54:45.592905 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 12:54:45.598886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 12:54:45.605149 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 12:54:45.609723 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 12:54:45.614375 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 12:54:45.618976 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 12:54:45.619067 systemd[1]: Reached target paths.target - Path Units. Mar 2 12:54:45.639227 systemd[1]: Reached target timers.target - Timer Units. Mar 2 12:54:45.644569 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 12:54:45.651435 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 12:54:45.673102 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 12:54:45.691220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 2 12:54:45.702769 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 12:54:45.711058 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 12:54:45.713704 systemd[1]: Reached target basic.target - Basic System. Mar 2 12:54:45.746381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 2 12:54:45.746662 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 12:54:45.762743 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 12:54:45.771385 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 12:54:45.775530 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 12:54:45.779540 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Mar 2 12:54:45.788761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 12:54:45.794401 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 12:54:45.797968 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 12:54:45.806672 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 2 12:54:45.807490 jq[1433]: false Mar 2 12:54:45.812725 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 12:54:45.820562 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 2 12:54:45.847574 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 12:54:45.852707 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 12:54:45.855668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 12:54:45.865694 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 12:54:45.872904 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 12:54:45.881091 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 2 12:54:45.887827 dbus-daemon[1432]: [system] SELinux support is enabled Mar 2 12:54:45.891847 extend-filesystems[1434]: Found loop3 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found loop4 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found loop5 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found sr0 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda1 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda2 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda3 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found usr Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda4 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda6 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda7 Mar 2 12:54:45.891847 extend-filesystems[1434]: Found vda9 Mar 2 12:54:45.891847 extend-filesystems[1434]: Checking size of /dev/vda9 Mar 2 12:54:45.979073 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1368) Mar 2 12:54:45.891783 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 12:54:45.979794 update_engine[1442]: I20260302 12:54:45.919343 1442 main.cc:92] Flatcar Update Engine starting Mar 2 12:54:45.979794 update_engine[1442]: I20260302 12:54:45.922684 1442 update_check_scheduler.cc:74] Next update check in 8m53s Mar 2 12:54:45.908754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 12:54:45.980234 extend-filesystems[1434]: Resized partition /dev/vda9 Mar 2 12:54:45.985273 jq[1447]: true Mar 2 12:54:45.911190 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 12:54:45.941947 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 12:54:45.942242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 2 12:54:45.948042 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 12:54:45.948403 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 2 12:54:45.990250 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Mar 2 12:54:45.992626 systemd-networkd[1386]: eth0: Gained IPv6LL Mar 2 12:54:45.998972 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 12:54:46.011877 jq[1455]: true Mar 2 12:54:46.012827 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 12:54:46.024085 systemd[1]: Started update-engine.service - Update Engine. Mar 2 12:54:46.042717 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Mar 2 12:54:46.042751 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 12:54:46.043697 systemd-logind[1440]: New seat seat0. Mar 2 12:54:46.050989 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 12:54:46.057971 tar[1454]: linux-amd64/LICENSE Mar 2 12:54:46.057971 tar[1454]: linux-amd64/helm Mar 2 12:54:46.059352 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 12:54:46.061362 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 12:54:46.074745 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 12:54:46.086793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:54:46.094478 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 12:54:46.102640 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 12:54:46.102863 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 12:54:46.114478 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 2 12:54:46.114743 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 12:54:46.313937 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 12:54:46.328487 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 12:54:46.373798 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 12:54:46.373798 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 12:54:46.373798 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 12:54:46.393791 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Mar 2 12:54:46.374458 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 12:54:46.374838 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 12:54:46.417351 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Mar 2 12:54:46.420116 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 12:54:46.425229 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 12:54:46.436884 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 12:54:46.437427 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 12:54:46.444616 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
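extend-filesystems grows the root ext4 filesystem on /dev/vda9 online from 553472 to 1864699 4 KiB blocks. Expressed in bytes, as a quick check of the numbers in the log and nothing more:

BLOCK = 4096  # ext4 block size reported in the log ("(4k) blocks")

old_blocks, new_blocks = 553_472, 1_864_699
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: ~2.11 GiB, after: ~7.11 GiB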
Mar 2 12:54:46.457986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 12:54:46.714350 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 12:54:47.668189 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 12:54:48.150713 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 12:54:49.055376 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 12:54:49.115065 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 12:54:49.118052 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 12:54:49.171067 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 12:54:49.855568 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 12:54:49.889090 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 12:54:49.898649 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 12:54:49.905087 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 12:54:49.911073 containerd[1470]: time="2026-03-02T12:54:49.910876226Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 12:54:49.998246 containerd[1470]: time="2026-03-02T12:54:49.998017458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:54:50.006933 containerd[1470]: time="2026-03-02T12:54:50.006869638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:54:50.007093 containerd[1470]: time="2026-03-02T12:54:50.007070478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 12:54:50.007172 containerd[1470]: time="2026-03-02T12:54:50.007154241Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 12:54:50.007744 containerd[1470]: time="2026-03-02T12:54:50.007715153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 2 12:54:50.007838 containerd[1470]: time="2026-03-02T12:54:50.007817251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 12:54:50.008080 containerd[1470]: time="2026-03-02T12:54:50.008052073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:54:50.008158 containerd[1470]: time="2026-03-02T12:54:50.008138171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:54:50.008668 containerd[1470]: time="2026-03-02T12:54:50.008637921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:54:50.008753 containerd[1470]: time="2026-03-02T12:54:50.008732615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 2 12:54:50.008836 containerd[1470]: time="2026-03-02T12:54:50.008814726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:54:50.008902 containerd[1470]: time="2026-03-02T12:54:50.008886117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 2 12:54:50.009368 containerd[1470]: time="2026-03-02T12:54:50.009247391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:54:50.009926 containerd[1470]: time="2026-03-02T12:54:50.009896335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:54:50.010238 containerd[1470]: time="2026-03-02T12:54:50.010209932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:54:50.010436 containerd[1470]: time="2026-03-02T12:54:50.010411823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 2 12:54:50.010869 containerd[1470]: time="2026-03-02T12:54:50.010843267Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 12:54:50.011017 containerd[1470]: time="2026-03-02T12:54:50.010997791Z" level=info msg="metadata content store policy set" policy=shared Mar 2 12:54:50.028078 containerd[1470]: time="2026-03-02T12:54:50.028014010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 12:54:50.030902 containerd[1470]: time="2026-03-02T12:54:50.030858332Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 12:54:50.031174 containerd[1470]: time="2026-03-02T12:54:50.031153374Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 12:54:50.049898 containerd[1470]: time="2026-03-02T12:54:50.049857816Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 2 12:54:50.050785 containerd[1470]: time="2026-03-02T12:54:50.050752483Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 12:54:50.051565 containerd[1470]: time="2026-03-02T12:54:50.051532717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 2 12:54:50.052203 containerd[1470]: time="2026-03-02T12:54:50.052172193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 12:54:50.052902 containerd[1470]: time="2026-03-02T12:54:50.052874145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 2 12:54:50.052994 containerd[1470]: time="2026-03-02T12:54:50.052974930Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 2 12:54:50.053459 containerd[1470]: time="2026-03-02T12:54:50.053431630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 2 12:54:50.053689 containerd[1470]: time="2026-03-02T12:54:50.053666381Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.053767 containerd[1470]: time="2026-03-02T12:54:50.053747641Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.053847 containerd[1470]: time="2026-03-02T12:54:50.053829121Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.053921 containerd[1470]: time="2026-03-02T12:54:50.053905071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.053989 containerd[1470]: time="2026-03-02T12:54:50.053972475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.054220 containerd[1470]: time="2026-03-02T12:54:50.054198070Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.054434 containerd[1470]: time="2026-03-02T12:54:50.054408657Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.054598 containerd[1470]: time="2026-03-02T12:54:50.054574623Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 12:54:50.054754 containerd[1470]: time="2026-03-02T12:54:50.054731130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.054839 containerd[1470]: time="2026-03-02T12:54:50.054819874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.054932 containerd[1470]: time="2026-03-02T12:54:50.054914528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055152 containerd[1470]: time="2026-03-02T12:54:50.055122611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055238 containerd[1470]: time="2026-03-02T12:54:50.055220671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055410 containerd[1470]: time="2026-03-02T12:54:50.055388739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055557 containerd[1470]: time="2026-03-02T12:54:50.055476612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055655 containerd[1470]: time="2026-03-02T12:54:50.055633571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055735 containerd[1470]: time="2026-03-02T12:54:50.055717815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055826 containerd[1470]: time="2026-03-02T12:54:50.055807981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.055894 containerd[1470]: time="2026-03-02T12:54:50.055877429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Mar 2 12:54:50.055966 containerd[1470]: time="2026-03-02T12:54:50.055948920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.056188 containerd[1470]: time="2026-03-02T12:54:50.056166691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.056266 containerd[1470]: time="2026-03-02T12:54:50.056250214Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 12:54:50.056474 containerd[1470]: time="2026-03-02T12:54:50.056453428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.056693 containerd[1470]: time="2026-03-02T12:54:50.056671639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.056761 containerd[1470]: time="2026-03-02T12:54:50.056745265Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 12:54:50.057181 containerd[1470]: time="2026-03-02T12:54:50.056930345Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 12:54:50.057348 containerd[1470]: time="2026-03-02T12:54:50.057254912Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 12:54:50.057425 containerd[1470]: time="2026-03-02T12:54:50.057406170Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 12:54:50.057553 containerd[1470]: time="2026-03-02T12:54:50.057477201Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 12:54:50.057627 containerd[1470]: time="2026-03-02T12:54:50.057609766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 2 12:54:50.057807 containerd[1470]: time="2026-03-02T12:54:50.057782603Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 12:54:50.057888 containerd[1470]: time="2026-03-02T12:54:50.057869443Z" level=info msg="NRI interface is disabled by configuration." Mar 2 12:54:50.057958 containerd[1470]: time="2026-03-02T12:54:50.057941876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 2 12:54:50.058831 containerd[1470]: time="2026-03-02T12:54:50.058733432Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 12:54:50.059380 containerd[1470]: time="2026-03-02T12:54:50.059354494Z" level=info msg="Connect containerd service" Mar 2 12:54:50.059592 containerd[1470]: time="2026-03-02T12:54:50.059566764Z" level=info msg="using legacy CRI server" Mar 2 12:54:50.059724 containerd[1470]: time="2026-03-02T12:54:50.059704428Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 12:54:50.059934 containerd[1470]: time="2026-03-02T12:54:50.059912109Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 12:54:50.061922 containerd[1470]: time="2026-03-02T12:54:50.061883305Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 12:54:50.062425 
containerd[1470]: time="2026-03-02T12:54:50.062387763Z" level=info msg="Start subscribing containerd event" Mar 2 12:54:50.063151 containerd[1470]: time="2026-03-02T12:54:50.063121873Z" level=info msg="Start recovering state" Mar 2 12:54:50.063808 containerd[1470]: time="2026-03-02T12:54:50.063782088Z" level=info msg="Start event monitor" Mar 2 12:54:50.064839 containerd[1470]: time="2026-03-02T12:54:50.063931713Z" level=info msg="Start snapshots syncer" Mar 2 12:54:50.064951 containerd[1470]: time="2026-03-02T12:54:50.064925871Z" level=info msg="Start cni network conf syncer for default" Mar 2 12:54:50.065030 containerd[1470]: time="2026-03-02T12:54:50.065011700Z" level=info msg="Start streaming server" Mar 2 12:54:50.065365 containerd[1470]: time="2026-03-02T12:54:50.064802975Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 12:54:50.065600 containerd[1470]: time="2026-03-02T12:54:50.065575666Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 12:54:50.066229 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 12:54:50.087885 containerd[1470]: time="2026-03-02T12:54:50.087608429Z" level=info msg="containerd successfully booted in 0.191292s" Mar 2 12:54:50.745245 tar[1454]: linux-amd64/README.md Mar 2 12:54:50.778592 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 12:54:55.707616 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 12:54:55.723137 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:40252.service - OpenSSH per-connection server daemon (10.0.0.1:40252). Mar 2 12:54:55.863752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:54:55.864546 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 12:54:55.865628 systemd[1]: Startup finished in 4.053s (kernel) + 19.792s (initrd) + 21.254s (userspace) = 45.100s. Mar 2 12:54:55.866194 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:54:57.582959 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 40252 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:54:57.589501 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:58.082972 systemd-logind[1440]: New session 1 of user core. Mar 2 12:54:58.085048 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 12:54:58.101934 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 12:54:58.170877 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 12:54:58.180901 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 12:54:58.245817 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 12:54:58.535440 systemd[1562]: Queued start job for default target default.target. Mar 2 12:54:58.554823 systemd[1562]: Created slice app.slice - User Application Slice. Mar 2 12:54:58.554903 systemd[1562]: Reached target paths.target - Paths. Mar 2 12:54:58.554925 systemd[1562]: Reached target timers.target - Timers. Mar 2 12:54:58.562022 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 12:54:58.610795 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 12:54:58.610999 systemd[1562]: Reached target sockets.target - Sockets. 
Mar 2 12:54:58.611021 systemd[1562]: Reached target basic.target - Basic System. Mar 2 12:54:58.611084 systemd[1562]: Reached target default.target - Main User Target. Mar 2 12:54:58.611145 systemd[1562]: Startup finished in 334ms. Mar 2 12:54:58.611609 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 12:54:58.619627 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 12:54:59.065943 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:58414.service - OpenSSH per-connection server daemon (10.0.0.1:58414). Mar 2 12:54:59.167167 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 58414 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:54:59.171659 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:59.182118 systemd-logind[1440]: New session 2 of user core. Mar 2 12:54:59.192961 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 12:54:59.796513 sshd[1575]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:59.809661 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:58414.service: Deactivated successfully. Mar 2 12:54:59.814173 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 12:54:59.818630 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Mar 2 12:54:59.852774 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Mar 2 12:54:59.855945 systemd-logind[1440]: Removed session 2. Mar 2 12:54:59.862446 kubelet[1549]: E0302 12:54:59.862041 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:54:59.876980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:54:59.877420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:54:59.878138 systemd[1]: kubelet.service: Consumed 10.266s CPU time. Mar 2 12:54:59.924702 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:54:59.955078 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:59.968832 systemd-logind[1440]: New session 3 of user core. Mar 2 12:54:59.980737 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 12:55:00.064924 sshd[1583]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:00.079610 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:58430.service: Deactivated successfully. Mar 2 12:55:00.083058 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 12:55:00.087059 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Mar 2 12:55:00.106903 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:58436.service - OpenSSH per-connection server daemon (10.0.0.1:58436). Mar 2 12:55:00.109939 systemd-logind[1440]: Removed session 3. Mar 2 12:55:00.165673 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:00.169903 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:00.180246 systemd-logind[1440]: New session 4 of user core. 
Mar 2 12:55:00.190900 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 12:55:00.261879 sshd[1591]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:00.278030 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:58436.service: Deactivated successfully. Mar 2 12:55:00.280846 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 12:55:00.283274 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Mar 2 12:55:00.295418 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450). Mar 2 12:55:00.297763 systemd-logind[1440]: Removed session 4. Mar 2 12:55:00.341989 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:00.344747 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:00.364230 systemd-logind[1440]: New session 5 of user core. Mar 2 12:55:00.375849 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 12:55:00.462888 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 12:55:00.463467 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:00.488948 sudo[1601]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:00.494008 sshd[1598]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:00.689638 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:58450.service: Deactivated successfully. Mar 2 12:55:00.712478 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 12:55:00.757621 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Mar 2 12:55:00.868574 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:58466.service - OpenSSH per-connection server daemon (10.0.0.1:58466). Mar 2 12:55:00.947915 systemd-logind[1440]: Removed session 5. Mar 2 12:55:01.008539 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 58466 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:01.011142 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:01.050128 systemd-logind[1440]: New session 6 of user core. Mar 2 12:55:01.065975 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 12:55:01.173894 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 12:55:01.174591 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:01.189086 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:01.348189 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 12:55:01.348990 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:01.396923 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:01.410684 auditctl[1613]: No rules Mar 2 12:55:01.411635 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 12:55:01.412043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 12:55:01.428064 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:01.600583 augenrules[1631]: No rules Mar 2 12:55:01.606272 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 2 12:55:01.609427 sudo[1609]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:01.614550 sshd[1606]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:01.645535 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:58466.service: Deactivated successfully. Mar 2 12:55:01.648175 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 12:55:01.661155 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Mar 2 12:55:01.676095 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478). Mar 2 12:55:01.713893 systemd-logind[1440]: Removed session 6. Mar 2 12:55:01.940658 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:01.949601 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:01.974783 systemd-logind[1440]: New session 7 of user core. Mar 2 12:55:01.985110 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 12:55:02.089454 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 12:55:02.091011 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:06.033252 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 12:55:06.121897 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 12:55:09.247467 dockerd[1660]: time="2026-03-02T12:55:09.246024185Z" level=info msg="Starting up" Mar 2 12:55:09.826501 dockerd[1660]: time="2026-03-02T12:55:09.826170056Z" level=info msg="Loading containers: start." Mar 2 12:55:10.000140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 12:55:10.028753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:10.385019 kernel: Initializing XFRM netlink socket Mar 2 12:55:10.636096 systemd-networkd[1386]: docker0: Link UP Mar 2 12:55:10.747840 dockerd[1660]: time="2026-03-02T12:55:10.747688964Z" level=info msg="Loading containers: done." Mar 2 12:55:12.312112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:55:12.349419 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:12.484430 dockerd[1660]: time="2026-03-02T12:55:12.447132728Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 12:55:12.493197 dockerd[1660]: time="2026-03-02T12:55:12.492996092Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 12:55:12.493859 dockerd[1660]: time="2026-03-02T12:55:12.493547763Z" level=info msg="Daemon has completed initialization" Mar 2 12:55:12.606811 kubelet[1782]: E0302 12:55:12.606133 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:12.612822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:12.613580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:12.614361 systemd[1]: kubelet.service: Consumed 1.472s CPU time. Mar 2 12:55:12.630980 dockerd[1660]: time="2026-03-02T12:55:12.627164423Z" level=info msg="API listen on /run/docker.sock" Mar 2 12:55:12.628140 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 12:55:15.955938 containerd[1470]: time="2026-03-02T12:55:15.955109506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 2 12:55:17.071694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704546827.mount: Deactivated successfully. 
Mar 2 12:55:20.782923 containerd[1470]: time="2026-03-02T12:55:20.782576902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:20.784643 containerd[1470]: time="2026-03-02T12:55:20.783541446Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 2 12:55:20.785441 containerd[1470]: time="2026-03-02T12:55:20.785369816Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:20.790479 containerd[1470]: time="2026-03-02T12:55:20.790387894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:20.792066 containerd[1470]: time="2026-03-02T12:55:20.791925832Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 4.836744183s" Mar 2 12:55:20.792066 containerd[1470]: time="2026-03-02T12:55:20.791999729Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 2 12:55:20.799447 containerd[1470]: time="2026-03-02T12:55:20.799362827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 2 12:55:22.754665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 12:55:22.803045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:23.333524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:23.340347 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:23.700983 kubelet[1899]: E0302 12:55:23.699909 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:23.707377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:23.707719 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 12:55:24.692192 containerd[1470]: time="2026-03-02T12:55:24.692028497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:24.695482 containerd[1470]: time="2026-03-02T12:55:24.694689961Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 2 12:55:24.700373 containerd[1470]: time="2026-03-02T12:55:24.699966991Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:24.716272 containerd[1470]: time="2026-03-02T12:55:24.715138258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:24.716561 containerd[1470]: time="2026-03-02T12:55:24.716523845Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 3.917094015s" Mar 2 12:55:24.716615 containerd[1470]: time="2026-03-02T12:55:24.716569451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 2 12:55:24.727467 containerd[1470]: time="2026-03-02T12:55:24.726259117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 2 12:55:28.794196 containerd[1470]: time="2026-03-02T12:55:28.793766624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:28.797634 containerd[1470]: time="2026-03-02T12:55:28.797529326Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 2 12:55:28.803004 containerd[1470]: time="2026-03-02T12:55:28.802864091Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:28.811213 containerd[1470]: time="2026-03-02T12:55:28.810903057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:28.816729 containerd[1470]: time="2026-03-02T12:55:28.815336927Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 4.08880885s" Mar 2 12:55:28.816729 containerd[1470]: time="2026-03-02T12:55:28.815417085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 2 12:55:28.860837 containerd[1470]: 
time="2026-03-02T12:55:28.857724051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 2 12:55:30.769474 update_engine[1442]: I20260302 12:55:30.768563 1442 update_attempter.cc:509] Updating boot flags... Mar 2 12:55:30.952348 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1923) Mar 2 12:55:31.129345 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1922) Mar 2 12:55:31.701771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192727206.mount: Deactivated successfully. Mar 2 12:55:33.120705 containerd[1470]: time="2026-03-02T12:55:33.119555945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:33.120705 containerd[1470]: time="2026-03-02T12:55:33.120015828Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 2 12:55:33.122678 containerd[1470]: time="2026-03-02T12:55:33.122209819Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:33.126494 containerd[1470]: time="2026-03-02T12:55:33.126415565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:33.127873 containerd[1470]: time="2026-03-02T12:55:33.127803792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 4.269394538s" Mar 2 12:55:33.127933 containerd[1470]: time="2026-03-02T12:55:33.127870847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 2 12:55:33.132463 containerd[1470]: time="2026-03-02T12:55:33.132336451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 2 12:55:33.767217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 12:55:33.812040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:34.438780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:34.508046 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:34.542966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240159231.mount: Deactivated successfully. Mar 2 12:55:34.903529 kubelet[1942]: E0302 12:55:34.902923 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:34.909343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:34.909655 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 12:55:42.977985 containerd[1470]: time="2026-03-02T12:55:42.977103241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:42.985000 containerd[1470]: time="2026-03-02T12:55:42.983616758Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 2 12:55:42.985683 containerd[1470]: time="2026-03-02T12:55:42.985599809Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:43.001520 containerd[1470]: time="2026-03-02T12:55:43.001189082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:43.002466 containerd[1470]: time="2026-03-02T12:55:43.002382144Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 9.869986521s" Mar 2 12:55:43.002466 containerd[1470]: time="2026-03-02T12:55:43.002415155Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 2 12:55:43.015599 containerd[1470]: time="2026-03-02T12:55:43.014156551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 12:55:44.677762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708274431.mount: Deactivated successfully. 
Mar 2 12:55:44.711977 containerd[1470]: time="2026-03-02T12:55:44.709796456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:44.715621 containerd[1470]: time="2026-03-02T12:55:44.715417602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 12:55:44.724429 containerd[1470]: time="2026-03-02T12:55:44.724253269Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:44.733527 containerd[1470]: time="2026-03-02T12:55:44.732048191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:44.736707 containerd[1470]: time="2026-03-02T12:55:44.734776530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.720564815s" Mar 2 12:55:44.736707 containerd[1470]: time="2026-03-02T12:55:44.735000627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 12:55:44.745594 containerd[1470]: time="2026-03-02T12:55:44.745329896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 2 12:55:45.029752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 2 12:55:45.052914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:45.605257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:45.636605 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:45.917961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943525864.mount: Deactivated successfully. Mar 2 12:55:46.151632 kubelet[2014]: E0302 12:55:46.150455 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:46.158104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:46.158623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:56.274780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 2 12:55:56.295759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:57.385651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:55:57.416937 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:58.992240 kubelet[2087]: E0302 12:55:58.991911 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:59.001771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:59.002092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:59.003017 systemd[1]: kubelet.service: Consumed 3.403s CPU time. Mar 2 12:55:59.072492 containerd[1470]: time="2026-03-02T12:55:59.072413370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:59.080238 containerd[1470]: time="2026-03-02T12:55:59.080061182Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 2 12:55:59.086358 containerd[1470]: time="2026-03-02T12:55:59.083680797Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:59.091522 containerd[1470]: time="2026-03-02T12:55:59.091393486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:59.094043 containerd[1470]: time="2026-03-02T12:55:59.093415414Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 14.347933294s" Mar 2 12:55:59.094043 containerd[1470]: time="2026-03-02T12:55:59.093490064Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 2 12:56:06.153922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:06.155954 systemd[1]: kubelet.service: Consumed 3.403s CPU time. Mar 2 12:56:06.180271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:06.291168 systemd[1]: Reloading requested from client PID 2133 ('systemctl') (unit session-7.scope)... Mar 2 12:56:06.291274 systemd[1]: Reloading... Mar 2 12:56:06.615389 zram_generator::config[2175]: No configuration found. Mar 2 12:56:07.216840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:56:07.470899 systemd[1]: Reloading finished in 1178 ms. Mar 2 12:56:07.650595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 12:56:07.650831 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 12:56:07.651447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:56:07.665648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:08.404963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:08.428099 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:56:09.291174 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 12:56:09.291174 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:56:09.291174 kubelet[2220]: I0302 12:56:09.290781 2220 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 12:56:11.351449 kubelet[2220]: I0302 12:56:11.348998 2220 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 12:56:11.351449 kubelet[2220]: I0302 12:56:11.349925 2220 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:56:11.358495 kubelet[2220]: I0302 12:56:11.354783 2220 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 12:56:11.358495 kubelet[2220]: I0302 12:56:11.354817 2220 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 12:56:11.358495 kubelet[2220]: I0302 12:56:11.356236 2220 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 12:56:11.475273 kubelet[2220]: E0302 12:56:11.473632 2220 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:56:11.475273 kubelet[2220]: I0302 12:56:11.476033 2220 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:56:11.534054 kubelet[2220]: E0302 12:56:11.531407 2220 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:56:11.534054 kubelet[2220]: I0302 12:56:11.531561 2220 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 12:56:11.595067 kubelet[2220]: I0302 12:56:11.592922 2220 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 12:56:11.608189 kubelet[2220]: I0302 12:56:11.607657 2220 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:56:11.611521 kubelet[2220]: I0302 12:56:11.607962 2220 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 12:56:11.611521 kubelet[2220]: I0302 12:56:11.610078 2220 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 12:56:11.611521 kubelet[2220]: I0302 12:56:11.610101 2220 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 12:56:11.611521 kubelet[2220]: I0302 12:56:11.610710 2220 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 12:56:11.647735 kubelet[2220]: I0302 12:56:11.643261 2220 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:56:11.649538 kubelet[2220]: I0302 12:56:11.649504 2220 kubelet.go:475] "Attempting to sync node with API server" Mar 2 12:56:11.653788 kubelet[2220]: I0302 12:56:11.649676 2220 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:56:11.653788 kubelet[2220]: I0302 12:56:11.649827 2220 kubelet.go:387] "Adding apiserver pod source" Mar 2 12:56:11.653788 kubelet[2220]: I0302 12:56:11.649925 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:56:11.658067 kubelet[2220]: E0302 12:56:11.657957 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 12:56:11.659263 kubelet[2220]: E0302 12:56:11.659173 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:56:11.665255 kubelet[2220]: I0302 12:56:11.665079 2220 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:56:11.666163 kubelet[2220]: I0302 12:56:11.666093 2220 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:56:11.666273 kubelet[2220]: I0302 12:56:11.666181 2220 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 12:56:11.668746 kubelet[2220]: W0302 12:56:11.666577 2220 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 2 12:56:11.684429 kubelet[2220]: I0302 12:56:11.682031 2220 server.go:1262] "Started kubelet" Mar 2 12:56:11.684429 kubelet[2220]: I0302 12:56:11.682613 2220 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:56:11.686401 kubelet[2220]: I0302 12:56:11.684699 2220 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:56:11.686401 kubelet[2220]: I0302 12:56:11.684940 2220 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 12:56:11.686401 kubelet[2220]: I0302 12:56:11.686128 2220 server.go:310] "Adding debug handlers to kubelet server" Mar 2 12:56:11.686886 kubelet[2220]: I0302 12:56:11.686857 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 12:56:11.689796 kubelet[2220]: I0302 12:56:11.689719 2220 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:56:11.694344 kubelet[2220]: I0302 12:56:11.693668 2220 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:56:11.700836 kubelet[2220]: E0302 12:56:11.700796 2220 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:11.701092 kubelet[2220]: I0302 12:56:11.701073 2220 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 12:56:11.707382 kubelet[2220]: I0302 12:56:11.704951 2220 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 12:56:11.707382 kubelet[2220]: I0302 12:56:11.705259 2220 reconciler.go:29] "Reconciler: start to sync state" Mar 2 12:56:11.707382 kubelet[2220]: E0302 12:56:11.706122 2220 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:56:11.707382 kubelet[2220]: E0302 12:56:11.706829 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 12:56:11.708885 kubelet[2220]: E0302 12:56:11.708809 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Mar 2 12:56:11.745416 kubelet[2220]: E0302 12:56:11.737981 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899077f4d0b75a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:11.681838496 +0000 UTC m=+3.029363566,LastTimestamp:2026-03-02 12:56:11.681838496 +0000 UTC m=+3.029363566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:11.747824 kubelet[2220]: I0302 12:56:11.738656 2220 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:56:11.748351 kubelet[2220]: I0302 12:56:11.748261 2220 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:56:11.754345 kubelet[2220]: I0302 12:56:11.753890 2220 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:56:11.757671 kubelet[2220]: I0302 12:56:11.757405 2220 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 2 12:56:11.802061 kubelet[2220]: E0302 12:56:11.801981 2220 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:11.833541 kubelet[2220]: I0302 12:56:11.832692 2220 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 12:56:11.833541 kubelet[2220]: I0302 12:56:11.832738 2220 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 12:56:11.833541 kubelet[2220]: I0302 12:56:11.832806 2220 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:56:11.865384 kubelet[2220]: I0302 12:56:11.864706 2220 policy_none.go:49] "None policy: Start" Mar 2 12:56:11.867155 kubelet[2220]: I0302 12:56:11.866700 2220 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 12:56:11.913370 kubelet[2220]: I0302 12:56:11.867025 2220 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 12:56:11.914777 kubelet[2220]: E0302 12:56:11.914185 2220 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:11.917015 kubelet[2220]: E0302 12:56:11.915396 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Mar 2 12:56:11.918677 kubelet[2220]: I0302 12:56:11.918579 2220 policy_none.go:47] "Start" Mar 2 12:56:11.921054 kubelet[2220]: I0302 12:56:11.920777 2220 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 12:56:11.921817 kubelet[2220]: I0302 12:56:11.921181 2220 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 12:56:11.922825 kubelet[2220]: I0302 12:56:11.922679 2220 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 12:56:11.932834 kubelet[2220]: E0302 12:56:11.923055 2220 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:56:11.944754 kubelet[2220]: E0302 12:56:11.940939 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 12:56:11.954613 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 12:56:11.996666 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 12:56:12.018225 kubelet[2220]: E0302 12:56:12.015416 2220 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:12.031585 kubelet[2220]: E0302 12:56:12.031523 2220 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 12:56:12.038257 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 2 12:56:12.062023 kubelet[2220]: E0302 12:56:12.060387 2220 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:56:12.062023 kubelet[2220]: I0302 12:56:12.060824 2220 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 12:56:12.069176 kubelet[2220]: I0302 12:56:12.066475 2220 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:56:12.069176 kubelet[2220]: E0302 12:56:12.067050 2220 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 12:56:12.069176 kubelet[2220]: E0302 12:56:12.067344 2220 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 12:56:12.069176 kubelet[2220]: I0302 12:56:12.067687 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 12:56:12.310035 kubelet[2220]: I0302 12:56:12.305937 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:12.329010 kubelet[2220]: E0302 12:56:12.324655 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Mar 2 12:56:12.329975 kubelet[2220]: E0302 12:56:12.329866 2220 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 2 12:56:12.347606 kubelet[2220]: I0302 12:56:12.346751 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:12.347606 kubelet[2220]: I0302 12:56:12.346996 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:12.347606 kubelet[2220]: I0302 12:56:12.347035 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:12.347606 kubelet[2220]: I0302 12:56:12.347173 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:12.347606 kubelet[2220]: I0302 12:56:12.347250 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:12.378711 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 2 12:56:12.400876 kubelet[2220]: E0302 12:56:12.400523 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:12.409967 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 2 12:56:12.417804 kubelet[2220]: E0302 12:56:12.417699 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:12.451933 kubelet[2220]: I0302 12:56:12.451425 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3b0b9cdf291e2fe824cbcebfeace163-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b0b9cdf291e2fe824cbcebfeace163\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:12.451933 kubelet[2220]: I0302 12:56:12.451507 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3b0b9cdf291e2fe824cbcebfeace163-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b0b9cdf291e2fe824cbcebfeace163\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:12.451933 kubelet[2220]: I0302 12:56:12.451806 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3b0b9cdf291e2fe824cbcebfeace163-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b3b0b9cdf291e2fe824cbcebfeace163\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:12.451933 kubelet[2220]: I0302 12:56:12.451935 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:12.458695 systemd[1]: Created slice kubepods-burstable-podb3b0b9cdf291e2fe824cbcebfeace163.slice - libcontainer container kubepods-burstable-podb3b0b9cdf291e2fe824cbcebfeace163.slice. 
Mar 2 12:56:12.463418 kubelet[2220]: E0302 12:56:12.463128 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:12.508414 kubelet[2220]: E0302 12:56:12.507148 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 12:56:12.667997 kubelet[2220]: I0302 12:56:12.662438 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:12.667997 kubelet[2220]: E0302 12:56:12.666114 2220 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 2 12:56:12.701677 kubelet[2220]: E0302 12:56:12.701540 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:56:12.715452 kubelet[2220]: E0302 12:56:12.712469 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:12.723030 containerd[1470]: time="2026-03-02T12:56:12.719509629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:12.742526 kubelet[2220]: E0302 12:56:12.742444 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:12.744407 containerd[1470]: time="2026-03-02T12:56:12.744122460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:12.854444 kubelet[2220]: E0302 12:56:12.853972 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 12:56:12.859388 kubelet[2220]: E0302 12:56:12.858719 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:12.860880 containerd[1470]: time="2026-03-02T12:56:12.860693711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b3b0b9cdf291e2fe824cbcebfeace163,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:13.121708 kubelet[2220]: I0302 12:56:13.119699 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:13.121708 kubelet[2220]: E0302 12:56:13.121101 2220 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: 
connect: connection refused" node="localhost" Mar 2 12:56:13.170143 kubelet[2220]: E0302 12:56:13.167001 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Mar 2 12:56:13.316009 kubelet[2220]: E0302 12:56:13.314997 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 12:56:13.670832 kubelet[2220]: E0302 12:56:13.670477 2220 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:56:13.943162 kubelet[2220]: I0302 12:56:13.940979 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:13.943162 kubelet[2220]: E0302 12:56:13.941996 2220 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 2 12:56:13.948072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607542811.mount: Deactivated successfully. Mar 2 12:56:13.959059 containerd[1470]: time="2026-03-02T12:56:13.958605516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:13.966588 containerd[1470]: time="2026-03-02T12:56:13.966459322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 12:56:13.971103 containerd[1470]: time="2026-03-02T12:56:13.970627514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:13.975901 containerd[1470]: time="2026-03-02T12:56:13.974860587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 12:56:13.977609 containerd[1470]: time="2026-03-02T12:56:13.977516419Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:13.981600 containerd[1470]: time="2026-03-02T12:56:13.981474900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:13.986259 containerd[1470]: time="2026-03-02T12:56:13.985810651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 12:56:13.989418 containerd[1470]: time="2026-03-02T12:56:13.989177845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:13.991530 containerd[1470]: time="2026-03-02T12:56:13.991428807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.269594504s" Mar 2 12:56:13.994945 containerd[1470]: time="2026-03-02T12:56:13.994519152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.133575496s" Mar 2 12:56:14.010174 containerd[1470]: time="2026-03-02T12:56:14.009235009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.264564567s" Mar 2 12:56:14.775962 kubelet[2220]: E0302 12:56:14.774131 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="3.2s" Mar 2 12:56:14.781321 containerd[1470]: time="2026-03-02T12:56:14.780114736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:14.781321 containerd[1470]: time="2026-03-02T12:56:14.780797522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:14.781321 containerd[1470]: time="2026-03-02T12:56:14.780833309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:14.781321 containerd[1470]: time="2026-03-02T12:56:14.781060953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:14.802850 containerd[1470]: time="2026-03-02T12:56:14.802621330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:14.802850 containerd[1470]: time="2026-03-02T12:56:14.802735804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:14.802850 containerd[1470]: time="2026-03-02T12:56:14.802757515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:14.803193 containerd[1470]: time="2026-03-02T12:56:14.802886585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:14.829702 kubelet[2220]: E0302 12:56:14.819430 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:56:14.860748 containerd[1470]: time="2026-03-02T12:56:14.839700405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:14.860748 containerd[1470]: time="2026-03-02T12:56:14.840155984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:14.860748 containerd[1470]: time="2026-03-02T12:56:14.840523029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:14.860748 containerd[1470]: time="2026-03-02T12:56:14.840845970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:14.949455 kubelet[2220]: E0302 12:56:14.948648 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 12:56:14.963928 kubelet[2220]: E0302 12:56:14.963588 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 12:56:15.005099 systemd[1]: Started cri-containerd-531feaf45b42b7374d65b04b66896abf471b9d610f3ebe003f294aceda42f067.scope - libcontainer container 531feaf45b42b7374d65b04b66896abf471b9d610f3ebe003f294aceda42f067. Mar 2 12:56:15.013846 systemd[1]: Started cri-containerd-4ac950be0af920e3710ca4523c0bf4cb27c35adbda206cf79a27ae1721a9deb8.scope - libcontainer container 4ac950be0af920e3710ca4523c0bf4cb27c35adbda206cf79a27ae1721a9deb8. Mar 2 12:56:15.084984 systemd[1]: Started cri-containerd-3222cd64cd269df2f79e4b96e64c64ff9b87124565e53515322a4eb431ea91a2.scope - libcontainer container 3222cd64cd269df2f79e4b96e64c64ff9b87124565e53515322a4eb431ea91a2. 
Mar 2 12:56:15.653654 kubelet[2220]: I0302 12:56:15.652987 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:15.656802 kubelet[2220]: E0302 12:56:15.656638 2220 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 2 12:56:15.687752 containerd[1470]: time="2026-03-02T12:56:15.685545954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ac950be0af920e3710ca4523c0bf4cb27c35adbda206cf79a27ae1721a9deb8\"" Mar 2 12:56:15.690856 containerd[1470]: time="2026-03-02T12:56:15.688774714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b3b0b9cdf291e2fe824cbcebfeace163,Namespace:kube-system,Attempt:0,} returns sandbox id \"531feaf45b42b7374d65b04b66896abf471b9d610f3ebe003f294aceda42f067\"" Mar 2 12:56:15.690921 kubelet[2220]: E0302 12:56:15.690411 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:15.693412 kubelet[2220]: E0302 12:56:15.691674 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:15.769011 kubelet[2220]: E0302 12:56:15.767161 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 12:56:15.778379 containerd[1470]: time="2026-03-02T12:56:15.773176131Z" level=info msg="CreateContainer within sandbox \"4ac950be0af920e3710ca4523c0bf4cb27c35adbda206cf79a27ae1721a9deb8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 12:56:15.778379 containerd[1470]: time="2026-03-02T12:56:15.774964415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3222cd64cd269df2f79e4b96e64c64ff9b87124565e53515322a4eb431ea91a2\"" Mar 2 12:56:15.778379 containerd[1470]: time="2026-03-02T12:56:15.775259913Z" level=info msg="CreateContainer within sandbox \"531feaf45b42b7374d65b04b66896abf471b9d610f3ebe003f294aceda42f067\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 12:56:15.782413 kubelet[2220]: E0302 12:56:15.781466 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:15.812419 containerd[1470]: time="2026-03-02T12:56:15.811588432Z" level=info msg="CreateContainer within sandbox \"3222cd64cd269df2f79e4b96e64c64ff9b87124565e53515322a4eb431ea91a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 12:56:16.209823 containerd[1470]: time="2026-03-02T12:56:16.208675827Z" level=info msg="CreateContainer within sandbox \"4ac950be0af920e3710ca4523c0bf4cb27c35adbda206cf79a27ae1721a9deb8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"a0fdb3b469ef31f08d5e37a2ac4424f31f5827cc861b3f9ddf751cb903a7f72e\"" Mar 2 12:56:16.211688 containerd[1470]: time="2026-03-02T12:56:16.211595459Z" level=info msg="StartContainer for \"a0fdb3b469ef31f08d5e37a2ac4424f31f5827cc861b3f9ddf751cb903a7f72e\"" Mar 2 12:56:16.268814 containerd[1470]: time="2026-03-02T12:56:16.260092459Z" level=info msg="CreateContainer within sandbox \"531feaf45b42b7374d65b04b66896abf471b9d610f3ebe003f294aceda42f067\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cdc2f5322e59a42c86d2b622f899c498a32383c3c400187dcdceb7eb1756a722\"" Mar 2 12:56:16.284835 containerd[1470]: time="2026-03-02T12:56:16.272124977Z" level=info msg="StartContainer for \"cdc2f5322e59a42c86d2b622f899c498a32383c3c400187dcdceb7eb1756a722\"" Mar 2 12:56:16.480401 containerd[1470]: time="2026-03-02T12:56:16.476132845Z" level=info msg="CreateContainer within sandbox \"3222cd64cd269df2f79e4b96e64c64ff9b87124565e53515322a4eb431ea91a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf9fd746f95bb4c92f2829ac85ea46af7391cbd23c2e9ad59b3afcc2a0bf4669\"" Mar 2 12:56:16.513030 containerd[1470]: time="2026-03-02T12:56:16.512923863Z" level=info msg="StartContainer for \"bf9fd746f95bb4c92f2829ac85ea46af7391cbd23c2e9ad59b3afcc2a0bf4669\"" Mar 2 12:56:17.585551 kubelet[2220]: E0302 12:56:17.584434 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899077f4d0b75a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:11.681838496 +0000 UTC m=+3.029363566,LastTimestamp:2026-03-02 12:56:11.681838496 +0000 UTC m=+3.029363566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:17.585781 systemd[1]: Started cri-containerd-a0fdb3b469ef31f08d5e37a2ac4424f31f5827cc861b3f9ddf751cb903a7f72e.scope - libcontainer container a0fdb3b469ef31f08d5e37a2ac4424f31f5827cc861b3f9ddf751cb903a7f72e. Mar 2 12:56:18.420066 kubelet[2220]: E0302 12:56:18.413260 2220 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:56:18.420066 kubelet[2220]: E0302 12:56:18.413569 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="6.4s" Mar 2 12:56:18.704930 systemd[1]: Started cri-containerd-bf9fd746f95bb4c92f2829ac85ea46af7391cbd23c2e9ad59b3afcc2a0bf4669.scope - libcontainer container bf9fd746f95bb4c92f2829ac85ea46af7391cbd23c2e9ad59b3afcc2a0bf4669. 
Mar 2 12:56:18.899417 kubelet[2220]: I0302 12:56:18.899086 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:18.906609 kubelet[2220]: E0302 12:56:18.899903 2220 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 2 12:56:18.904685 systemd[1]: Started cri-containerd-cdc2f5322e59a42c86d2b622f899c498a32383c3c400187dcdceb7eb1756a722.scope - libcontainer container cdc2f5322e59a42c86d2b622f899c498a32383c3c400187dcdceb7eb1756a722. Mar 2 12:56:19.152527 containerd[1470]: time="2026-03-02T12:56:19.102181836Z" level=info msg="StartContainer for \"bf9fd746f95bb4c92f2829ac85ea46af7391cbd23c2e9ad59b3afcc2a0bf4669\" returns successfully" Mar 2 12:56:19.227605 containerd[1470]: time="2026-03-02T12:56:19.223695160Z" level=info msg="StartContainer for \"a0fdb3b469ef31f08d5e37a2ac4424f31f5827cc861b3f9ddf751cb903a7f72e\" returns successfully" Mar 2 12:56:19.292136 containerd[1470]: time="2026-03-02T12:56:19.291573455Z" level=info msg="StartContainer for \"cdc2f5322e59a42c86d2b622f899c498a32383c3c400187dcdceb7eb1756a722\" returns successfully" Mar 2 12:56:19.778113 kubelet[2220]: E0302 12:56:19.777382 2220 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:56:19.809040 kubelet[2220]: E0302 12:56:19.803804 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:19.969089 kubelet[2220]: E0302 12:56:19.965958 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:20.482491 kubelet[2220]: E0302 12:56:20.480778 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:20.486379 kubelet[2220]: E0302 12:56:20.484906 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:20.546952 kubelet[2220]: E0302 12:56:20.544614 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:20.546952 kubelet[2220]: E0302 12:56:20.545559 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:21.742516 kubelet[2220]: E0302 12:56:21.741679 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:21.742516 kubelet[2220]: E0302 12:56:21.741885 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:21.742516 kubelet[2220]: E0302 12:56:21.742745 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:21.755895 kubelet[2220]: E0302 12:56:21.751062 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:21.755895 kubelet[2220]: E0302 12:56:21.755830 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:21.759116 kubelet[2220]: E0302 12:56:21.756362 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:22.069673 kubelet[2220]: E0302 12:56:22.069446 2220 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 12:56:22.746810 kubelet[2220]: E0302 12:56:22.742794 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:22.746810 kubelet[2220]: E0302 12:56:22.744884 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:22.746810 kubelet[2220]: E0302 12:56:22.746380 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:22.751879 kubelet[2220]: E0302 12:56:22.751607 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:23.889837 kubelet[2220]: E0302 12:56:23.888597 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:23.892691 kubelet[2220]: E0302 12:56:23.892354 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:25.366688 kubelet[2220]: I0302 12:56:25.365347 2220 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:26.049577 kubelet[2220]: E0302 12:56:26.047079 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:26.049577 kubelet[2220]: E0302 12:56:26.048221 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:26.716842 kubelet[2220]: E0302 12:56:26.714721 2220 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:26.716842 kubelet[2220]: E0302 12:56:26.715597 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:29.445239 kubelet[2220]: E0302 12:56:29.443498 2220 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" 
node="localhost" Mar 2 12:56:29.489586 kubelet[2220]: I0302 12:56:29.489379 2220 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 12:56:29.516256 kubelet[2220]: I0302 12:56:29.511035 2220 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:29.655434 kubelet[2220]: E0302 12:56:29.654106 2220 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899077f4d0b75a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:11.681838496 +0000 UTC m=+3.029363566,LastTimestamp:2026-03-02 12:56:11.681838496 +0000 UTC m=+3.029363566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:29.702387 kubelet[2220]: E0302 12:56:29.701881 2220 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:29.702387 kubelet[2220]: I0302 12:56:29.701927 2220 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:29.708080 kubelet[2220]: E0302 12:56:29.707654 2220 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:29.708080 kubelet[2220]: I0302 12:56:29.707705 2220 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:29.714260 kubelet[2220]: E0302 12:56:29.714214 2220 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:29.722555 kubelet[2220]: E0302 12:56:29.719935 2220 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899077f4e7d7537 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:11.706086711 +0000 UTC m=+3.053611761,LastTimestamp:2026-03-02 12:56:11.706086711 +0000 UTC m=+3.053611761,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:30.117091 kubelet[2220]: I0302 12:56:30.115719 2220 apiserver.go:52] "Watching apiserver" Mar 2 12:56:30.212256 kubelet[2220]: I0302 12:56:30.210912 2220 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 12:56:33.517235 kubelet[2220]: I0302 12:56:33.515902 2220 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:33.566862 kubelet[2220]: E0302 12:56:33.566794 
2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:34.344952 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-7.scope)... Mar 2 12:56:34.345010 systemd[1]: Reloading... Mar 2 12:56:34.544916 kubelet[2220]: E0302 12:56:34.544737 2220 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:34.608472 zram_generator::config[2556]: No configuration found. Mar 2 12:56:35.120859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:56:35.358784 systemd[1]: Reloading finished in 1012 ms. Mar 2 12:56:35.447718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:35.473055 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 12:56:35.474045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:35.474154 systemd[1]: kubelet.service: Consumed 10.310s CPU time, 130.9M memory peak, 0B memory swap peak. Mar 2 12:56:35.488407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:35.947882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:35.950543 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:56:36.275699 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 12:56:36.275699 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:56:36.275699 kubelet[2603]: I0302 12:56:36.275759 2603 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 12:56:36.318602 kubelet[2603]: I0302 12:56:36.317685 2603 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 12:56:36.318602 kubelet[2603]: I0302 12:56:36.317832 2603 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:56:36.318602 kubelet[2603]: I0302 12:56:36.318018 2603 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 12:56:36.318602 kubelet[2603]: I0302 12:56:36.318140 2603 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 12:56:36.342260 kubelet[2603]: I0302 12:56:36.319142 2603 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 12:56:36.350538 kubelet[2603]: I0302 12:56:36.348092 2603 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 12:56:36.356405 kubelet[2603]: I0302 12:56:36.355728 2603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:56:36.369403 kubelet[2603]: E0302 12:56:36.368863 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:56:36.369403 kubelet[2603]: I0302 12:56:36.369016 2603 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 12:56:36.462030 kubelet[2603]: I0302 12:56:36.460897 2603 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 2 12:56:36.462030 kubelet[2603]: I0302 12:56:36.461933 2603 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:56:36.462030 kubelet[2603]: I0302 12:56:36.462130 2603 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 12:56:36.462030 kubelet[2603]: I0302 12:56:36.462559 2603 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.462573 2603 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.462714 2603 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.463720 2603 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.463957 2603 kubelet.go:475] "Attempting to sync node with API 
server" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.463969 2603 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.463997 2603 kubelet.go:387] "Adding apiserver pod source" Mar 2 12:56:36.464130 kubelet[2603]: I0302 12:56:36.464013 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:56:36.545135 kubelet[2603]: I0302 12:56:36.543565 2603 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:56:36.545135 kubelet[2603]: I0302 12:56:36.544778 2603 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:56:36.545135 kubelet[2603]: I0302 12:56:36.544824 2603 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 12:56:36.561618 kubelet[2603]: I0302 12:56:36.559931 2603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:56:36.561618 kubelet[2603]: I0302 12:56:36.561507 2603 server.go:1262] "Started kubelet" Mar 2 12:56:36.561618 kubelet[2603]: I0302 12:56:36.561595 2603 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:56:36.561914 kubelet[2603]: I0302 12:56:36.561667 2603 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 12:56:36.567121 kubelet[2603]: I0302 12:56:36.563173 2603 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:56:36.567121 kubelet[2603]: I0302 12:56:36.565128 2603 server.go:310] "Adding debug handlers to kubelet server" Mar 2 12:56:36.568074 kubelet[2603]: I0302 12:56:36.567498 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 12:56:36.572188 kubelet[2603]: I0302 12:56:36.570402 2603 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 12:56:36.572188 kubelet[2603]: I0302 12:56:36.570607 2603 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 12:56:36.572188 kubelet[2603]: I0302 12:56:36.570764 2603 reconciler.go:29] "Reconciler: start to sync state" Mar 2 12:56:36.575429 kubelet[2603]: I0302 12:56:36.574768 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:56:36.577175 kubelet[2603]: I0302 12:56:36.577149 2603 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:56:36.578272 kubelet[2603]: E0302 12:56:36.578063 2603 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:56:36.578272 kubelet[2603]: I0302 12:56:36.578022 2603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:56:36.586402 kubelet[2603]: I0302 12:56:36.585890 2603 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:56:36.610764 kubelet[2603]: I0302 12:56:36.610708 2603 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 2 12:56:36.616767 kubelet[2603]: I0302 12:56:36.616727 2603 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 12:56:36.616950 kubelet[2603]: I0302 12:56:36.616929 2603 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 12:56:36.618488 kubelet[2603]: I0302 12:56:36.617111 2603 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 12:56:36.618488 kubelet[2603]: E0302 12:56:36.617430 2603 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:56:36.724997 kubelet[2603]: E0302 12:56:36.718849 2603 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 12:56:36.729808 kubelet[2603]: I0302 12:56:36.729707 2603 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 12:56:36.730035 kubelet[2603]: I0302 12:56:36.729982 2603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 12:56:36.730888 kubelet[2603]: I0302 12:56:36.730140 2603 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:56:36.731410 kubelet[2603]: I0302 12:56:36.731216 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 12:56:36.731729 kubelet[2603]: I0302 12:56:36.731607 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 12:56:36.731899 kubelet[2603]: I0302 12:56:36.731765 2603 policy_none.go:49] "None policy: Start" Mar 2 12:56:36.731958 kubelet[2603]: I0302 12:56:36.731909 2603 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 12:56:36.732377 kubelet[2603]: I0302 12:56:36.732095 2603 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 12:56:36.733220 kubelet[2603]: I0302 12:56:36.733097 2603 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 12:56:36.733455 kubelet[2603]: I0302 12:56:36.733251 2603 policy_none.go:47] "Start" Mar 2 12:56:36.746881 kubelet[2603]: E0302 12:56:36.746788 2603 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:56:36.747203 kubelet[2603]: I0302 12:56:36.747105 2603 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 12:56:36.747203 kubelet[2603]: I0302 12:56:36.747164 2603 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:56:36.750462 kubelet[2603]: I0302 12:56:36.750100 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 12:56:36.754736 kubelet[2603]: E0302 12:56:36.754188 2603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 12:56:36.871465 kubelet[2603]: I0302 12:56:36.867845 2603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:56:36.900866 kubelet[2603]: I0302 12:56:36.899476 2603 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 12:56:36.900866 kubelet[2603]: I0302 12:56:36.899605 2603 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 12:56:36.928663 kubelet[2603]: I0302 12:56:36.927778 2603 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:36.928663 kubelet[2603]: I0302 12:56:36.928158 2603 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:36.928663 kubelet[2603]: I0302 12:56:36.928613 2603 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:36.958853 kubelet[2603]: E0302 12:56:36.958655 2603 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:36.975989 kubelet[2603]: I0302 12:56:36.974708 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:36.978153 kubelet[2603]: I0302 12:56:36.976766 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:37.002078 kubelet[2603]: I0302 12:56:36.978671 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:37.002078 kubelet[2603]: I0302 12:56:36.978705 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:37.002078 kubelet[2603]: I0302 12:56:36.978798 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3b0b9cdf291e2fe824cbcebfeace163-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b0b9cdf291e2fe824cbcebfeace163\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:37.002078 kubelet[2603]: I0302 12:56:36.978822 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 
12:56:37.002078 kubelet[2603]: I0302 12:56:36.978846 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:37.024563 kubelet[2603]: I0302 12:56:37.005886 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3b0b9cdf291e2fe824cbcebfeace163-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b0b9cdf291e2fe824cbcebfeace163\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:37.024563 kubelet[2603]: I0302 12:56:37.005929 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3b0b9cdf291e2fe824cbcebfeace163-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b3b0b9cdf291e2fe824cbcebfeace163\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:37.259215 kubelet[2603]: E0302 12:56:37.259158 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.260618 kubelet[2603]: E0302 12:56:37.260054 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.348470 kubelet[2603]: E0302 12:56:37.345965 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.496678 kubelet[2603]: I0302 12:56:37.495016 2603 apiserver.go:52] "Watching apiserver" Mar 2 12:56:37.574415 kubelet[2603]: I0302 12:56:37.572490 2603 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 12:56:37.694111 kubelet[2603]: I0302 12:56:37.692935 2603 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:37.708191 kubelet[2603]: I0302 12:56:37.705232 2603 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:37.716930 kubelet[2603]: E0302 12:56:37.716227 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.762404 kubelet[2603]: E0302 12:56:37.758653 2603 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:37.762404 kubelet[2603]: E0302 12:56:37.759661 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.771397 kubelet[2603]: E0302 12:56:37.770972 2603 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:37.774919 kubelet[2603]: E0302 12:56:37.774745 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.938242 kubelet[2603]: I0302 12:56:37.935715 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.935646347 podStartE2EDuration="1.935646347s" podCreationTimestamp="2026-03-02 12:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:37.906356683 +0000 UTC m=+1.934323641" watchObservedRunningTime="2026-03-02 12:56:37.935646347 +0000 UTC m=+1.963613295" Mar 2 12:56:37.961081 kubelet[2603]: I0302 12:56:37.960604 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.960585967 podStartE2EDuration="4.960585967s" podCreationTimestamp="2026-03-02 12:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:37.936786805 +0000 UTC m=+1.964753772" watchObservedRunningTime="2026-03-02 12:56:37.960585967 +0000 UTC m=+1.988552914" Mar 2 12:56:38.117869 kubelet[2603]: I0302 12:56:38.115162 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.113368082 podStartE2EDuration="2.113368082s" podCreationTimestamp="2026-03-02 12:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:37.961035394 +0000 UTC m=+1.989002352" watchObservedRunningTime="2026-03-02 12:56:38.113368082 +0000 UTC m=+2.141335051" Mar 2 12:56:38.696810 kubelet[2603]: E0302 12:56:38.696136 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:38.731990 kubelet[2603]: E0302 12:56:38.699398 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:38.744744 kubelet[2603]: E0302 12:56:38.699870 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:40.193154 kubelet[2603]: E0302 12:56:40.192242 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:41.421945 kubelet[2603]: I0302 12:56:41.420869 2603 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 12:56:41.435786 containerd[1470]: time="2026-03-02T12:56:41.425250061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 2 12:56:41.436272 kubelet[2603]: I0302 12:56:41.436033 2603 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 12:56:42.274204 kubelet[2603]: I0302 12:56:42.273958 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29cx6\" (UniqueName: \"kubernetes.io/projected/4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0-kube-api-access-29cx6\") pod \"kube-proxy-v8nxp\" (UID: \"4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0\") " pod="kube-system/kube-proxy-v8nxp" Mar 2 12:56:42.274204 kubelet[2603]: I0302 12:56:42.274020 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0-kube-proxy\") pod \"kube-proxy-v8nxp\" (UID: \"4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0\") " pod="kube-system/kube-proxy-v8nxp" Mar 2 12:56:42.274204 kubelet[2603]: I0302 12:56:42.274047 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0-xtables-lock\") pod \"kube-proxy-v8nxp\" (UID: \"4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0\") " pod="kube-system/kube-proxy-v8nxp" Mar 2 12:56:42.274204 kubelet[2603]: I0302 12:56:42.274067 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0-lib-modules\") pod \"kube-proxy-v8nxp\" (UID: \"4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0\") " pod="kube-system/kube-proxy-v8nxp" Mar 2 12:56:42.286081 kubelet[2603]: E0302 12:56:42.284468 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:42.293064 systemd[1]: Created slice kubepods-besteffort-pod4602682e_3c43_4e8e_ae0a_1b41ca7e2dc0.slice - libcontainer container kubepods-besteffort-pod4602682e_3c43_4e8e_ae0a_1b41ca7e2dc0.slice. Mar 2 12:56:43.168609 systemd[1]: Created slice kubepods-besteffort-pod48e1bf9e_d3b8_4acc_b868_f139a0375bc3.slice - libcontainer container kubepods-besteffort-pod48e1bf9e_d3b8_4acc_b868_f139a0375bc3.slice. 
Mar 2 12:56:43.229748 kubelet[2603]: I0302 12:56:43.221659 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48e1bf9e-d3b8-4acc-b868-f139a0375bc3-var-lib-calico\") pod \"tigera-operator-85979684d8-jpn26\" (UID: \"48e1bf9e-d3b8-4acc-b868-f139a0375bc3\") " pod="tigera-operator/tigera-operator-85979684d8-jpn26" Mar 2 12:56:43.229748 kubelet[2603]: I0302 12:56:43.221909 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w668m\" (UniqueName: \"kubernetes.io/projected/48e1bf9e-d3b8-4acc-b868-f139a0375bc3-kube-api-access-w668m\") pod \"tigera-operator-85979684d8-jpn26\" (UID: \"48e1bf9e-d3b8-4acc-b868-f139a0375bc3\") " pod="tigera-operator/tigera-operator-85979684d8-jpn26" Mar 2 12:56:43.257199 kubelet[2603]: E0302 12:56:43.257077 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:43.261648 containerd[1470]: time="2026-03-02T12:56:43.261521892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v8nxp,Uid:4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:43.291760 kubelet[2603]: E0302 12:56:43.290459 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:43.392620 containerd[1470]: time="2026-03-02T12:56:43.391174532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:43.400728 containerd[1470]: time="2026-03-02T12:56:43.395412218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:43.400728 containerd[1470]: time="2026-03-02T12:56:43.395574261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:43.400728 containerd[1470]: time="2026-03-02T12:56:43.395704835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:43.597639 containerd[1470]: time="2026-03-02T12:56:43.587848439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-85979684d8-jpn26,Uid:48e1bf9e-d3b8-4acc-b868-f139a0375bc3,Namespace:tigera-operator,Attempt:0,}" Mar 2 12:56:44.703404 kubelet[2603]: E0302 12:56:44.701586 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:44.716780 systemd[1]: Started cri-containerd-ed8a6375f9aaf8aaee029b6de979740af6458056b6678811345a9ef2fe1e752c.scope - libcontainer container ed8a6375f9aaf8aaee029b6de979740af6458056b6678811345a9ef2fe1e752c. Mar 2 12:56:44.801625 containerd[1470]: time="2026-03-02T12:56:44.801102202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:44.801625 containerd[1470]: time="2026-03-02T12:56:44.801185888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:44.801625 containerd[1470]: time="2026-03-02T12:56:44.801205334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:44.801625 containerd[1470]: time="2026-03-02T12:56:44.801429742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:45.164667 containerd[1470]: time="2026-03-02T12:56:45.164186742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v8nxp,Uid:4602682e-3c43-4e8e-ae0a-1b41ca7e2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed8a6375f9aaf8aaee029b6de979740af6458056b6678811345a9ef2fe1e752c\"" Mar 2 12:56:45.169880 kubelet[2603]: E0302 12:56:45.169461 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:45.185246 containerd[1470]: time="2026-03-02T12:56:45.184851980Z" level=info msg="CreateContainer within sandbox \"ed8a6375f9aaf8aaee029b6de979740af6458056b6678811345a9ef2fe1e752c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 12:56:45.225551 containerd[1470]: time="2026-03-02T12:56:45.225211429Z" level=info msg="CreateContainer within sandbox \"ed8a6375f9aaf8aaee029b6de979740af6458056b6678811345a9ef2fe1e752c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d41be3c855d53503e35a7d4368ed981b5fbc4d8fc2e1c51d9ed34b091d4c72b8\"" Mar 2 12:56:45.240757 containerd[1470]: time="2026-03-02T12:56:45.240586268Z" level=info msg="StartContainer for \"d41be3c855d53503e35a7d4368ed981b5fbc4d8fc2e1c51d9ed34b091d4c72b8\"" Mar 2 12:56:45.253725 systemd[1]: Started cri-containerd-acfee186e0977bc4b2b7ba277732c9cc79b73877b71f4e2eae06dbf95f4d7fa7.scope - libcontainer container acfee186e0977bc4b2b7ba277732c9cc79b73877b71f4e2eae06dbf95f4d7fa7. Mar 2 12:56:45.696016 kubelet[2603]: E0302 12:56:45.690488 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:45.706573 kubelet[2603]: E0302 12:56:45.705090 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:45.767201 systemd[1]: Started cri-containerd-d41be3c855d53503e35a7d4368ed981b5fbc4d8fc2e1c51d9ed34b091d4c72b8.scope - libcontainer container d41be3c855d53503e35a7d4368ed981b5fbc4d8fc2e1c51d9ed34b091d4c72b8. 
Mar 2 12:56:46.060016 containerd[1470]: time="2026-03-02T12:56:45.873024338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-85979684d8-jpn26,Uid:48e1bf9e-d3b8-4acc-b868-f139a0375bc3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"acfee186e0977bc4b2b7ba277732c9cc79b73877b71f4e2eae06dbf95f4d7fa7\"" Mar 2 12:56:46.248771 containerd[1470]: time="2026-03-02T12:56:46.248585081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\"" Mar 2 12:56:46.286678 containerd[1470]: time="2026-03-02T12:56:46.286587700Z" level=info msg="StartContainer for \"d41be3c855d53503e35a7d4368ed981b5fbc4d8fc2e1c51d9ed34b091d4c72b8\" returns successfully" Mar 2 12:56:47.001631 kubelet[2603]: E0302 12:56:46.997802 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:47.397569 kubelet[2603]: E0302 12:56:47.396062 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:47.454188 kubelet[2603]: I0302 12:56:47.454119 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v8nxp" podStartSLOduration=5.454094256 podStartE2EDuration="5.454094256s" podCreationTimestamp="2026-03-02 12:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:47.073663349 +0000 UTC m=+11.101630318" watchObservedRunningTime="2026-03-02 12:56:47.454094256 +0000 UTC m=+11.482061215" Mar 2 12:56:48.211830 kubelet[2603]: E0302 12:56:48.209851 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:48.220046 kubelet[2603]: E0302 12:56:48.216907 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:48.476158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22469445.mount: Deactivated successfully. 
Mar 2 12:56:56.727896 containerd[1470]: time="2026-03-02T12:56:56.727706004Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:56.729144 containerd[1470]: time="2026-03-02T12:56:56.728992392Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719" Mar 2 12:56:56.733842 containerd[1470]: time="2026-03-02T12:56:56.733711708Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:56.749755 containerd[1470]: time="2026-03-02T12:56:56.748942819Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:56.761624 containerd[1470]: time="2026-03-02T12:56:56.753693478Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 10.505025603s" Mar 2 12:56:56.761624 containerd[1470]: time="2026-03-02T12:56:56.753736909Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\"" Mar 2 12:56:56.831730 containerd[1470]: time="2026-03-02T12:56:56.831532183Z" level=info msg="CreateContainer within sandbox \"acfee186e0977bc4b2b7ba277732c9cc79b73877b71f4e2eae06dbf95f4d7fa7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 2 12:56:56.868875 containerd[1470]: time="2026-03-02T12:56:56.868678373Z" level=info msg="CreateContainer within sandbox \"acfee186e0977bc4b2b7ba277732c9cc79b73877b71f4e2eae06dbf95f4d7fa7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"51a348b1c81c998b290dbc5be353704f3952107138f51e6a8f17a31a3657a444\"" Mar 2 12:56:56.869858 containerd[1470]: time="2026-03-02T12:56:56.869638439Z" level=info msg="StartContainer for \"51a348b1c81c998b290dbc5be353704f3952107138f51e6a8f17a31a3657a444\"" Mar 2 12:56:56.995958 systemd[1]: Started cri-containerd-51a348b1c81c998b290dbc5be353704f3952107138f51e6a8f17a31a3657a444.scope - libcontainer container 51a348b1c81c998b290dbc5be353704f3952107138f51e6a8f17a31a3657a444. 
Mar 2 12:56:57.288142 containerd[1470]: time="2026-03-02T12:56:57.288050551Z" level=info msg="StartContainer for \"51a348b1c81c998b290dbc5be353704f3952107138f51e6a8f17a31a3657a444\" returns successfully" Mar 2 12:56:58.233503 kubelet[2603]: I0302 12:56:58.233025 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-85979684d8-jpn26" podStartSLOduration=5.314670744 podStartE2EDuration="16.232983182s" podCreationTimestamp="2026-03-02 12:56:42 +0000 UTC" firstStartedPulling="2026-03-02 12:56:45.879055481 +0000 UTC m=+9.907022439" lastFinishedPulling="2026-03-02 12:56:56.797367929 +0000 UTC m=+20.825334877" observedRunningTime="2026-03-02 12:56:58.232267778 +0000 UTC m=+22.260234725" watchObservedRunningTime="2026-03-02 12:56:58.232983182 +0000 UTC m=+22.260950129" Mar 2 12:57:07.313681 sudo[1642]: pam_unix(sudo:session): session closed for user root Mar 2 12:57:07.351975 sshd[1639]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:07.376926 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:58478.service: Deactivated successfully. Mar 2 12:57:07.380122 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Mar 2 12:57:07.385775 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 12:57:07.405616 systemd[1]: session-7.scope: Consumed 19.250s CPU time, 165.4M memory peak, 0B memory swap peak. Mar 2 12:57:07.493643 systemd-logind[1440]: Removed session 7. Mar 2 12:57:15.629528 systemd[1]: Created slice kubepods-besteffort-podaa7c9079_a15a_4cef_84dc_999e267f3e80.slice - libcontainer container kubepods-besteffort-podaa7c9079_a15a_4cef_84dc_999e267f3e80.slice. Mar 2 12:57:15.661956 kubelet[2603]: I0302 12:57:15.659514 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59sd6\" (UniqueName: \"kubernetes.io/projected/aa7c9079-a15a-4cef-84dc-999e267f3e80-kube-api-access-59sd6\") pod \"calico-typha-7f58595dbc-z5db2\" (UID: \"aa7c9079-a15a-4cef-84dc-999e267f3e80\") " pod="calico-system/calico-typha-7f58595dbc-z5db2" Mar 2 12:57:15.661956 kubelet[2603]: I0302 12:57:15.659576 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aa7c9079-a15a-4cef-84dc-999e267f3e80-typha-certs\") pod \"calico-typha-7f58595dbc-z5db2\" (UID: \"aa7c9079-a15a-4cef-84dc-999e267f3e80\") " pod="calico-system/calico-typha-7f58595dbc-z5db2" Mar 2 12:57:15.661956 kubelet[2603]: I0302 12:57:15.659610 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa7c9079-a15a-4cef-84dc-999e267f3e80-tigera-ca-bundle\") pod \"calico-typha-7f58595dbc-z5db2\" (UID: \"aa7c9079-a15a-4cef-84dc-999e267f3e80\") " pod="calico-system/calico-typha-7f58595dbc-z5db2" Mar 2 12:57:15.968231 systemd[1]: Created slice kubepods-besteffort-pod4ae83cb2_66af_47ec_873c_ec41c190460c.slice - libcontainer container kubepods-besteffort-pod4ae83cb2_66af_47ec_873c_ec41c190460c.slice. 
Mar 2 12:57:16.002033 kubelet[2603]: I0302 12:57:16.001342 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-var-lib-calico\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002033 kubelet[2603]: I0302 12:57:16.001407 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-xtables-lock\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002033 kubelet[2603]: I0302 12:57:16.001439 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-sys-fs\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002033 kubelet[2603]: I0302 12:57:16.001462 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-nodeproc\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002033 kubelet[2603]: I0302 12:57:16.001483 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-bpffs\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002651 kubelet[2603]: I0302 12:57:16.001502 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-cni-bin-dir\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002651 kubelet[2603]: I0302 12:57:16.001524 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-cni-log-dir\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002651 kubelet[2603]: I0302 12:57:16.001547 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-cni-net-dir\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002651 kubelet[2603]: I0302 12:57:16.001568 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-policysync\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.002651 kubelet[2603]: I0302 12:57:16.001582 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-lib-modules\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.004357 kubelet[2603]: I0302 12:57:16.001594 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ae83cb2-66af-47ec-873c-ec41c190460c-tigera-ca-bundle\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.004357 kubelet[2603]: I0302 12:57:16.001608 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4ae83cb2-66af-47ec-873c-ec41c190460c-node-certs\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.004357 kubelet[2603]: I0302 12:57:16.001661 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-flexvol-driver-host\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.004357 kubelet[2603]: I0302 12:57:16.001688 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4ae83cb2-66af-47ec-873c-ec41c190460c-var-run-calico\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.004357 kubelet[2603]: I0302 12:57:16.001712 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flxm\" (UniqueName: \"kubernetes.io/projected/4ae83cb2-66af-47ec-873c-ec41c190460c-kube-api-access-7flxm\") pod \"calico-node-grbfj\" (UID: \"4ae83cb2-66af-47ec-873c-ec41c190460c\") " pod="calico-system/calico-node-grbfj" Mar 2 12:57:16.004357 kubelet[2603]: E0302 12:57:16.002995 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:16.004770 containerd[1470]: time="2026-03-02T12:57:16.004678856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f58595dbc-z5db2,Uid:aa7c9079-a15a-4cef-84dc-999e267f3e80,Namespace:calico-system,Attempt:0,}" Mar 2 12:57:16.248222 kubelet[2603]: E0302 12:57:16.232932 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:16.248222 kubelet[2603]: E0302 12:57:16.238584 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.248222 kubelet[2603]: W0302 12:57:16.238642 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.248222 kubelet[2603]: E0302 12:57:16.238752 2603 
plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.248222 kubelet[2603]: E0302 12:57:16.241579 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.248222 kubelet[2603]: W0302 12:57:16.241599 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.248222 kubelet[2603]: E0302 12:57:16.241621 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.248222 kubelet[2603]: E0302 12:57:16.245274 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.248222 kubelet[2603]: W0302 12:57:16.245352 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.248800 kubelet[2603]: E0302 12:57:16.245382 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.255077 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.274369 kubelet[2603]: W0302 12:57:16.255507 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.255740 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.258736 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.274369 kubelet[2603]: W0302 12:57:16.258755 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.259012 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.261888 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.274369 kubelet[2603]: W0302 12:57:16.261906 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.261928 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.274369 kubelet[2603]: E0302 12:57:16.263520 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.274984 kubelet[2603]: W0302 12:57:16.263536 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.274984 kubelet[2603]: E0302 12:57:16.263555 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.274984 kubelet[2603]: E0302 12:57:16.264238 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.274984 kubelet[2603]: W0302 12:57:16.264256 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.274984 kubelet[2603]: E0302 12:57:16.264271 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.274984 kubelet[2603]: E0302 12:57:16.269060 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.274984 kubelet[2603]: W0302 12:57:16.269085 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.274984 kubelet[2603]: E0302 12:57:16.269110 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.275482 kubelet[2603]: E0302 12:57:16.275122 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.275482 kubelet[2603]: W0302 12:57:16.275147 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.275482 kubelet[2603]: E0302 12:57:16.275211 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.284071 kubelet[2603]: E0302 12:57:16.278972 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.284071 kubelet[2603]: W0302 12:57:16.279087 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.284071 kubelet[2603]: E0302 12:57:16.279117 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.284071 kubelet[2603]: E0302 12:57:16.282407 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.284071 kubelet[2603]: W0302 12:57:16.282503 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.284071 kubelet[2603]: E0302 12:57:16.282529 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.293152 kubelet[2603]: E0302 12:57:16.285831 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.293152 kubelet[2603]: W0302 12:57:16.285930 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.293152 kubelet[2603]: E0302 12:57:16.285956 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.293152 kubelet[2603]: E0302 12:57:16.289713 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.293152 kubelet[2603]: W0302 12:57:16.289739 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.293152 kubelet[2603]: E0302 12:57:16.289849 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.294782 kubelet[2603]: E0302 12:57:16.294685 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.294782 kubelet[2603]: W0302 12:57:16.294749 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.294782 kubelet[2603]: E0302 12:57:16.294780 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.298859 kubelet[2603]: E0302 12:57:16.298782 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.298976 kubelet[2603]: W0302 12:57:16.298840 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.298976 kubelet[2603]: E0302 12:57:16.298926 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.309857 kubelet[2603]: E0302 12:57:16.309389 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.309857 kubelet[2603]: W0302 12:57:16.309442 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.309857 kubelet[2603]: E0302 12:57:16.309475 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.317589 kubelet[2603]: E0302 12:57:16.317501 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.317589 kubelet[2603]: W0302 12:57:16.317557 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.317589 kubelet[2603]: E0302 12:57:16.317586 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.325035 containerd[1470]: time="2026-03-02T12:57:16.314742642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:57:16.325035 containerd[1470]: time="2026-03-02T12:57:16.314839341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:57:16.325035 containerd[1470]: time="2026-03-02T12:57:16.314857766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:57:16.325035 containerd[1470]: time="2026-03-02T12:57:16.315008785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:57:16.331515 kubelet[2603]: E0302 12:57:16.331406 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.331515 kubelet[2603]: W0302 12:57:16.331475 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.331515 kubelet[2603]: E0302 12:57:16.331509 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.331805 kubelet[2603]: I0302 12:57:16.331549 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6027383d-96c9-465b-88ba-00723209fa19-varrun\") pod \"csi-node-driver-ktfwf\" (UID: \"6027383d-96c9-465b-88ba-00723209fa19\") " pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:57:16.334061 kubelet[2603]: E0302 12:57:16.333967 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.334061 kubelet[2603]: W0302 12:57:16.334029 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.334061 kubelet[2603]: E0302 12:57:16.334054 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.334658 kubelet[2603]: I0302 12:57:16.334091 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6027383d-96c9-465b-88ba-00723209fa19-kubelet-dir\") pod \"csi-node-driver-ktfwf\" (UID: \"6027383d-96c9-465b-88ba-00723209fa19\") " pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:57:16.336847 containerd[1470]: time="2026-03-02T12:57:16.336581991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-grbfj,Uid:4ae83cb2-66af-47ec-873c-ec41c190460c,Namespace:calico-system,Attempt:0,}" Mar 2 12:57:16.337240 kubelet[2603]: E0302 12:57:16.337054 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.338905 kubelet[2603]: W0302 12:57:16.338463 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.338905 kubelet[2603]: E0302 12:57:16.338641 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.344356 kubelet[2603]: E0302 12:57:16.341366 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.344356 kubelet[2603]: W0302 12:57:16.341414 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.344356 kubelet[2603]: E0302 12:57:16.341438 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.344356 kubelet[2603]: I0302 12:57:16.341756 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6027383d-96c9-465b-88ba-00723209fa19-registration-dir\") pod \"csi-node-driver-ktfwf\" (UID: \"6027383d-96c9-465b-88ba-00723209fa19\") " pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:57:16.344356 kubelet[2603]: E0302 12:57:16.342578 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.344356 kubelet[2603]: W0302 12:57:16.342686 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.344356 kubelet[2603]: E0302 12:57:16.342702 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.344356 kubelet[2603]: E0302 12:57:16.344018 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.344356 kubelet[2603]: W0302 12:57:16.344089 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.344949 kubelet[2603]: E0302 12:57:16.344105 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.349388 kubelet[2603]: E0302 12:57:16.345410 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.349388 kubelet[2603]: W0302 12:57:16.345490 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.349388 kubelet[2603]: E0302 12:57:16.345506 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.349388 kubelet[2603]: E0302 12:57:16.346623 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.349388 kubelet[2603]: W0302 12:57:16.346699 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.349388 kubelet[2603]: E0302 12:57:16.346719 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.349388 kubelet[2603]: I0302 12:57:16.346993 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6027383d-96c9-465b-88ba-00723209fa19-socket-dir\") pod \"csi-node-driver-ktfwf\" (UID: \"6027383d-96c9-465b-88ba-00723209fa19\") " pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:57:16.349388 kubelet[2603]: E0302 12:57:16.347541 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.349388 kubelet[2603]: W0302 12:57:16.347554 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.349872 kubelet[2603]: E0302 12:57:16.347575 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.349872 kubelet[2603]: E0302 12:57:16.348150 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.349872 kubelet[2603]: W0302 12:57:16.348207 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.349872 kubelet[2603]: E0302 12:57:16.348223 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.349872 kubelet[2603]: E0302 12:57:16.349245 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.349872 kubelet[2603]: W0302 12:57:16.349260 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.349872 kubelet[2603]: E0302 12:57:16.349275 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.350138 kubelet[2603]: E0302 12:57:16.349923 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.350138 kubelet[2603]: W0302 12:57:16.349939 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.350138 kubelet[2603]: E0302 12:57:16.349954 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.353615 kubelet[2603]: E0302 12:57:16.350815 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.353615 kubelet[2603]: W0302 12:57:16.350898 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.353615 kubelet[2603]: E0302 12:57:16.350915 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.353615 kubelet[2603]: E0302 12:57:16.351682 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.353615 kubelet[2603]: W0302 12:57:16.351696 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.353615 kubelet[2603]: E0302 12:57:16.351711 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.357649 kubelet[2603]: E0302 12:57:16.355983 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.357649 kubelet[2603]: W0302 12:57:16.356040 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.357649 kubelet[2603]: E0302 12:57:16.356109 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.358826 kubelet[2603]: E0302 12:57:16.358468 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.358826 kubelet[2603]: W0302 12:57:16.358657 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.358826 kubelet[2603]: E0302 12:57:16.358757 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.401887 systemd[1]: Started cri-containerd-44b797bc1353fd79e341a431346bbdfb971d1ed8cc872c81730057b580edad7d.scope - libcontainer container 44b797bc1353fd79e341a431346bbdfb971d1ed8cc872c81730057b580edad7d. 
Mar 2 12:57:16.457852 kubelet[2603]: E0302 12:57:16.456907 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.457852 kubelet[2603]: W0302 12:57:16.456947 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.457852 kubelet[2603]: E0302 12:57:16.456985 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.458385 kubelet[2603]: E0302 12:57:16.458361 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.458435 kubelet[2603]: W0302 12:57:16.458389 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.458435 kubelet[2603]: E0302 12:57:16.458427 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.483006 kubelet[2603]: E0302 12:57:16.463741 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.483006 kubelet[2603]: W0302 12:57:16.463781 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.483006 kubelet[2603]: E0302 12:57:16.463811 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.483006 kubelet[2603]: I0302 12:57:16.463859 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz7rv\" (UniqueName: \"kubernetes.io/projected/6027383d-96c9-465b-88ba-00723209fa19-kube-api-access-pz7rv\") pod \"csi-node-driver-ktfwf\" (UID: \"6027383d-96c9-465b-88ba-00723209fa19\") " pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:57:16.483006 kubelet[2603]: E0302 12:57:16.464555 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.483006 kubelet[2603]: W0302 12:57:16.464572 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.483006 kubelet[2603]: E0302 12:57:16.464591 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.483006 kubelet[2603]: E0302 12:57:16.464902 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.483006 kubelet[2603]: W0302 12:57:16.464914 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.483684 kubelet[2603]: E0302 12:57:16.464928 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.503658 kubelet[2603]: E0302 12:57:16.498567 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.503658 kubelet[2603]: W0302 12:57:16.498609 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.503658 kubelet[2603]: E0302 12:57:16.498643 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.542254 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.548854 kubelet[2603]: W0302 12:57:16.542382 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.542424 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.543088 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.548854 kubelet[2603]: W0302 12:57:16.543103 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.543126 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.547772 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.548854 kubelet[2603]: W0302 12:57:16.547791 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.547813 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.548854 kubelet[2603]: E0302 12:57:16.548395 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.549455 kubelet[2603]: W0302 12:57:16.548414 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.549455 kubelet[2603]: E0302 12:57:16.548433 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.549532 kubelet[2603]: E0302 12:57:16.549458 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.549532 kubelet[2603]: W0302 12:57:16.549477 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.549532 kubelet[2603]: E0302 12:57:16.549500 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.555961 kubelet[2603]: E0302 12:57:16.554810 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.555961 kubelet[2603]: W0302 12:57:16.554877 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.555961 kubelet[2603]: E0302 12:57:16.554901 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.570024 kubelet[2603]: E0302 12:57:16.567032 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.570024 kubelet[2603]: W0302 12:57:16.567079 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.570024 kubelet[2603]: E0302 12:57:16.567122 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.574459 kubelet[2603]: E0302 12:57:16.573557 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.574459 kubelet[2603]: W0302 12:57:16.573616 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.574459 kubelet[2603]: E0302 12:57:16.573646 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.585489 kubelet[2603]: E0302 12:57:16.584446 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.585489 kubelet[2603]: W0302 12:57:16.584484 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.585489 kubelet[2603]: E0302 12:57:16.584515 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.590685 kubelet[2603]: E0302 12:57:16.590031 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.590685 kubelet[2603]: W0302 12:57:16.590079 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.590685 kubelet[2603]: E0302 12:57:16.590122 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.591607 kubelet[2603]: E0302 12:57:16.591007 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.591607 kubelet[2603]: W0302 12:57:16.591060 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.591607 kubelet[2603]: E0302 12:57:16.591086 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.592579 kubelet[2603]: E0302 12:57:16.592093 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.592579 kubelet[2603]: W0302 12:57:16.592405 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.593260 kubelet[2603]: E0302 12:57:16.592930 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.596853 kubelet[2603]: E0302 12:57:16.596531 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.596853 kubelet[2603]: W0302 12:57:16.596555 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.596853 kubelet[2603]: E0302 12:57:16.596578 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.599195 kubelet[2603]: E0302 12:57:16.598802 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.599195 kubelet[2603]: W0302 12:57:16.598824 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.599195 kubelet[2603]: E0302 12:57:16.598846 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.599532 kubelet[2603]: E0302 12:57:16.599511 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.599608 kubelet[2603]: W0302 12:57:16.599591 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.599679 kubelet[2603]: E0302 12:57:16.599664 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.601635 kubelet[2603]: E0302 12:57:16.601525 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.601635 kubelet[2603]: W0302 12:57:16.601575 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.601635 kubelet[2603]: E0302 12:57:16.601602 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.602056 kubelet[2603]: E0302 12:57:16.601996 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.602056 kubelet[2603]: W0302 12:57:16.602047 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.602152 kubelet[2603]: E0302 12:57:16.602067 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.602900 kubelet[2603]: E0302 12:57:16.602542 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.602900 kubelet[2603]: W0302 12:57:16.602558 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.602900 kubelet[2603]: E0302 12:57:16.602574 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.611152 kubelet[2603]: E0302 12:57:16.606508 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.611152 kubelet[2603]: W0302 12:57:16.606527 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.611152 kubelet[2603]: E0302 12:57:16.606550 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.612951 containerd[1470]: time="2026-03-02T12:57:16.602502665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:57:16.612951 containerd[1470]: time="2026-03-02T12:57:16.602645130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:57:16.612951 containerd[1470]: time="2026-03-02T12:57:16.602665918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:57:16.612951 containerd[1470]: time="2026-03-02T12:57:16.602807651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:57:16.617243 kubelet[2603]: E0302 12:57:16.616543 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.617243 kubelet[2603]: W0302 12:57:16.616573 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.617243 kubelet[2603]: E0302 12:57:16.616605 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.642503 kubelet[2603]: E0302 12:57:16.637466 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.642503 kubelet[2603]: W0302 12:57:16.637540 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.642503 kubelet[2603]: E0302 12:57:16.637581 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.642503 kubelet[2603]: E0302 12:57:16.639972 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.642503 kubelet[2603]: W0302 12:57:16.640006 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.642503 kubelet[2603]: E0302 12:57:16.640043 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:16.682589 systemd[1]: Started cri-containerd-471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508.scope - libcontainer container 471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508. Mar 2 12:57:16.765502 kubelet[2603]: E0302 12:57:16.764784 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:16.765502 kubelet[2603]: W0302 12:57:16.764852 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:16.765502 kubelet[2603]: E0302 12:57:16.764890 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:16.865392 containerd[1470]: time="2026-03-02T12:57:16.864603514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f58595dbc-z5db2,Uid:aa7c9079-a15a-4cef-84dc-999e267f3e80,Namespace:calico-system,Attempt:0,} returns sandbox id \"44b797bc1353fd79e341a431346bbdfb971d1ed8cc872c81730057b580edad7d\"" Mar 2 12:57:16.881741 containerd[1470]: time="2026-03-02T12:57:16.881119864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-grbfj,Uid:4ae83cb2-66af-47ec-873c-ec41c190460c,Namespace:calico-system,Attempt:0,} returns sandbox id \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\"" Mar 2 12:57:16.884152 kubelet[2603]: E0302 12:57:16.882048 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:16.886763 containerd[1470]: time="2026-03-02T12:57:16.886716648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\"" Mar 2 12:57:17.618762 kubelet[2603]: E0302 12:57:17.617859 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:18.086033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217787565.mount: Deactivated successfully. 
Mar 2 12:57:19.629938 kubelet[2603]: E0302 12:57:19.629561 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:21.618992 kubelet[2603]: E0302 12:57:21.618698 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:22.595972 containerd[1470]: time="2026-03-02T12:57:22.593893520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:22.600048 containerd[1470]: time="2026-03-02T12:57:22.599868200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=36094696" Mar 2 12:57:22.603266 containerd[1470]: time="2026-03-02T12:57:22.602557958Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:22.607704 containerd[1470]: time="2026-03-02T12:57:22.607533147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:22.609741 containerd[1470]: time="2026-03-02T12:57:22.608422618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 5.721411244s" Mar 2 12:57:22.609741 containerd[1470]: time="2026-03-02T12:57:22.609015868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\"" Mar 2 12:57:22.611669 containerd[1470]: time="2026-03-02T12:57:22.611261223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\"" Mar 2 12:57:22.678046 containerd[1470]: time="2026-03-02T12:57:22.676972994Z" level=info msg="CreateContainer within sandbox \"44b797bc1353fd79e341a431346bbdfb971d1ed8cc872c81730057b580edad7d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 2 12:57:22.712199 containerd[1470]: time="2026-03-02T12:57:22.711177165Z" level=info msg="CreateContainer within sandbox \"44b797bc1353fd79e341a431346bbdfb971d1ed8cc872c81730057b580edad7d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7ed62618c625c6c251b81c37b102d7bde44a941c7fd6921e14e26cbd45b397de\"" Mar 2 12:57:22.713731 containerd[1470]: time="2026-03-02T12:57:22.713622952Z" level=info msg="StartContainer for \"7ed62618c625c6c251b81c37b102d7bde44a941c7fd6921e14e26cbd45b397de\"" Mar 2 12:57:22.794611 systemd[1]: Started cri-containerd-7ed62618c625c6c251b81c37b102d7bde44a941c7fd6921e14e26cbd45b397de.scope - libcontainer container 7ed62618c625c6c251b81c37b102d7bde44a941c7fd6921e14e26cbd45b397de. 
Mar 2 12:57:22.916483 containerd[1470]: time="2026-03-02T12:57:22.915777785Z" level=info msg="StartContainer for \"7ed62618c625c6c251b81c37b102d7bde44a941c7fd6921e14e26cbd45b397de\" returns successfully" Mar 2 12:57:23.203346 kubelet[2603]: E0302 12:57:23.200081 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:23.271257 kubelet[2603]: E0302 12:57:23.270799 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.271257 kubelet[2603]: W0302 12:57:23.270863 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.271257 kubelet[2603]: E0302 12:57:23.270899 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.272261 kubelet[2603]: E0302 12:57:23.272216 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.272261 kubelet[2603]: W0302 12:57:23.272254 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.272261 kubelet[2603]: E0302 12:57:23.272330 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.274371 kubelet[2603]: E0302 12:57:23.273169 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.274371 kubelet[2603]: W0302 12:57:23.273192 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.274371 kubelet[2603]: E0302 12:57:23.273213 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.275102 kubelet[2603]: E0302 12:57:23.274981 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.275102 kubelet[2603]: W0302 12:57:23.275029 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.275102 kubelet[2603]: E0302 12:57:23.275052 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.278851 kubelet[2603]: E0302 12:57:23.278751 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.278851 kubelet[2603]: W0302 12:57:23.278805 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.278851 kubelet[2603]: E0302 12:57:23.278831 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.281913 kubelet[2603]: E0302 12:57:23.281613 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.281913 kubelet[2603]: W0302 12:57:23.281636 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.281913 kubelet[2603]: E0302 12:57:23.281659 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.282536 kubelet[2603]: E0302 12:57:23.282101 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.282536 kubelet[2603]: W0302 12:57:23.282159 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.282536 kubelet[2603]: E0302 12:57:23.282182 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.285336 kubelet[2603]: E0302 12:57:23.284689 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.285336 kubelet[2603]: W0302 12:57:23.284714 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.285336 kubelet[2603]: E0302 12:57:23.284736 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.286609 kubelet[2603]: E0302 12:57:23.286557 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.286609 kubelet[2603]: W0302 12:57:23.286608 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.286801 kubelet[2603]: E0302 12:57:23.286629 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.287916 kubelet[2603]: E0302 12:57:23.287812 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.287916 kubelet[2603]: W0302 12:57:23.287883 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.287916 kubelet[2603]: E0302 12:57:23.287907 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.293431 kubelet[2603]: E0302 12:57:23.293381 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.293431 kubelet[2603]: W0302 12:57:23.293415 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.293610 kubelet[2603]: E0302 12:57:23.293445 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.294024 kubelet[2603]: E0302 12:57:23.293957 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.294024 kubelet[2603]: W0302 12:57:23.294013 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.294186 kubelet[2603]: E0302 12:57:23.294036 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.297906 kubelet[2603]: E0302 12:57:23.297837 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.297906 kubelet[2603]: W0302 12:57:23.297900 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.298067 kubelet[2603]: E0302 12:57:23.297931 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.299369 kubelet[2603]: E0302 12:57:23.299227 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.299509 kubelet[2603]: W0302 12:57:23.299273 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.299509 kubelet[2603]: E0302 12:57:23.299487 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.300739 kubelet[2603]: E0302 12:57:23.300700 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.300739 kubelet[2603]: W0302 12:57:23.300733 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.301070 kubelet[2603]: E0302 12:57:23.301008 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.306731 kubelet[2603]: I0302 12:57:23.306626 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f58595dbc-z5db2" podStartSLOduration=2.580066581 podStartE2EDuration="8.304761523s" podCreationTimestamp="2026-03-02 12:57:15 +0000 UTC" firstStartedPulling="2026-03-02 12:57:16.88608143 +0000 UTC m=+40.914048379" lastFinishedPulling="2026-03-02 12:57:22.610776363 +0000 UTC m=+46.638743321" observedRunningTime="2026-03-02 12:57:23.258650429 +0000 UTC m=+47.286617398" watchObservedRunningTime="2026-03-02 12:57:23.304761523 +0000 UTC m=+47.332728491" Mar 2 12:57:23.387335 kubelet[2603]: E0302 12:57:23.387230 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.387835 kubelet[2603]: W0302 12:57:23.387698 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.387835 kubelet[2603]: E0302 12:57:23.387770 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.388398 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.391468 kubelet[2603]: W0302 12:57:23.388441 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.388463 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.389188 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.391468 kubelet[2603]: W0302 12:57:23.389205 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.389224 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.389732 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.391468 kubelet[2603]: W0302 12:57:23.389745 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.389761 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.391468 kubelet[2603]: E0302 12:57:23.390182 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.392174 kubelet[2603]: W0302 12:57:23.390194 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.392174 kubelet[2603]: E0302 12:57:23.390209 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.392174 kubelet[2603]: E0302 12:57:23.390681 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.392174 kubelet[2603]: W0302 12:57:23.390694 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.392174 kubelet[2603]: E0302 12:57:23.390708 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.392174 kubelet[2603]: E0302 12:57:23.391260 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.392174 kubelet[2603]: W0302 12:57:23.391273 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.392174 kubelet[2603]: E0302 12:57:23.391362 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.395685 kubelet[2603]: E0302 12:57:23.393985 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.395685 kubelet[2603]: W0302 12:57:23.394063 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.395685 kubelet[2603]: E0302 12:57:23.394083 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.395685 kubelet[2603]: E0302 12:57:23.394694 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.395685 kubelet[2603]: W0302 12:57:23.394708 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.395685 kubelet[2603]: E0302 12:57:23.394722 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.397173 kubelet[2603]: E0302 12:57:23.397053 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.397247 kubelet[2603]: W0302 12:57:23.397190 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.397247 kubelet[2603]: E0302 12:57:23.397210 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.402475 kubelet[2603]: E0302 12:57:23.398242 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.402475 kubelet[2603]: W0302 12:57:23.402444 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.402475 kubelet[2603]: E0302 12:57:23.402478 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.404272 kubelet[2603]: E0302 12:57:23.403899 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.404272 kubelet[2603]: W0302 12:57:23.403941 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.404272 kubelet[2603]: E0302 12:57:23.403963 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.405262 kubelet[2603]: E0302 12:57:23.404857 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.405262 kubelet[2603]: W0302 12:57:23.404870 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.405262 kubelet[2603]: E0302 12:57:23.404884 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.407392 kubelet[2603]: E0302 12:57:23.407356 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.407392 kubelet[2603]: W0302 12:57:23.407386 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.407508 kubelet[2603]: E0302 12:57:23.407402 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.408665 kubelet[2603]: E0302 12:57:23.408643 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.408665 kubelet[2603]: W0302 12:57:23.408663 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.408842 kubelet[2603]: E0302 12:57:23.408679 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.409448 kubelet[2603]: E0302 12:57:23.409213 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.409448 kubelet[2603]: W0302 12:57:23.409260 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.409448 kubelet[2603]: E0302 12:57:23.409332 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.410038 kubelet[2603]: E0302 12:57:23.409822 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.410038 kubelet[2603]: W0302 12:57:23.409838 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.410038 kubelet[2603]: E0302 12:57:23.409851 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:23.410969 kubelet[2603]: E0302 12:57:23.410419 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:23.410969 kubelet[2603]: W0302 12:57:23.410449 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:23.410969 kubelet[2603]: E0302 12:57:23.410463 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:23.621364 kubelet[2603]: E0302 12:57:23.620154 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:24.198926 kubelet[2603]: E0302 12:57:24.197442 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:24.214569 kubelet[2603]: E0302 12:57:24.214101 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.214569 kubelet[2603]: W0302 12:57:24.214190 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.214569 kubelet[2603]: E0302 12:57:24.214218 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.216607 kubelet[2603]: E0302 12:57:24.215957 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.216607 kubelet[2603]: W0302 12:57:24.215974 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.216607 kubelet[2603]: E0302 12:57:24.216050 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.216887 kubelet[2603]: E0302 12:57:24.216825 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.216887 kubelet[2603]: W0302 12:57:24.216865 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.216997 kubelet[2603]: E0302 12:57:24.216884 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.220552 kubelet[2603]: E0302 12:57:24.220374 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.220552 kubelet[2603]: W0302 12:57:24.220412 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.220552 kubelet[2603]: E0302 12:57:24.220432 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.221262 kubelet[2603]: E0302 12:57:24.221165 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.221454 kubelet[2603]: W0302 12:57:24.221408 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.221454 kubelet[2603]: E0302 12:57:24.221431 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.222592 kubelet[2603]: E0302 12:57:24.222364 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.222991 kubelet[2603]: W0302 12:57:24.222744 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.222991 kubelet[2603]: E0302 12:57:24.222779 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.224626 kubelet[2603]: E0302 12:57:24.224468 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.224626 kubelet[2603]: W0302 12:57:24.224496 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.224626 kubelet[2603]: E0302 12:57:24.224512 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.225088 kubelet[2603]: E0302 12:57:24.225033 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.225088 kubelet[2603]: W0302 12:57:24.225063 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.225088 kubelet[2603]: E0302 12:57:24.225075 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.230045 kubelet[2603]: E0302 12:57:24.229350 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.230045 kubelet[2603]: W0302 12:57:24.229368 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.230045 kubelet[2603]: E0302 12:57:24.229385 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.230045 kubelet[2603]: E0302 12:57:24.229758 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.230045 kubelet[2603]: W0302 12:57:24.229773 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.230045 kubelet[2603]: E0302 12:57:24.229787 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.230961 kubelet[2603]: E0302 12:57:24.230917 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.230961 kubelet[2603]: W0302 12:57:24.230956 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.231095 kubelet[2603]: E0302 12:57:24.230973 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.231491 kubelet[2603]: E0302 12:57:24.231452 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.231491 kubelet[2603]: W0302 12:57:24.231484 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.231603 kubelet[2603]: E0302 12:57:24.231502 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.231888 kubelet[2603]: E0302 12:57:24.231853 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.231888 kubelet[2603]: W0302 12:57:24.231879 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.231996 kubelet[2603]: E0302 12:57:24.231892 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.232366 kubelet[2603]: E0302 12:57:24.232249 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.232366 kubelet[2603]: W0302 12:57:24.232334 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.232475 kubelet[2603]: E0302 12:57:24.232348 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.232727 kubelet[2603]: E0302 12:57:24.232675 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.232727 kubelet[2603]: W0302 12:57:24.232687 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.232727 kubelet[2603]: E0302 12:57:24.232698 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.315504 kubelet[2603]: E0302 12:57:24.314436 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.315504 kubelet[2603]: W0302 12:57:24.314470 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.315504 kubelet[2603]: E0302 12:57:24.314498 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.315504 kubelet[2603]: E0302 12:57:24.315005 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.315504 kubelet[2603]: W0302 12:57:24.315021 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.315504 kubelet[2603]: E0302 12:57:24.315037 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.317184 kubelet[2603]: E0302 12:57:24.317165 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.317272 kubelet[2603]: W0302 12:57:24.317255 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.317408 kubelet[2603]: E0302 12:57:24.317392 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.317986 kubelet[2603]: E0302 12:57:24.317969 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.318093 kubelet[2603]: W0302 12:57:24.318077 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.318411 kubelet[2603]: E0302 12:57:24.318213 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.318827 kubelet[2603]: E0302 12:57:24.318811 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.319011 kubelet[2603]: W0302 12:57:24.318890 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.319011 kubelet[2603]: E0302 12:57:24.318907 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.321203 kubelet[2603]: E0302 12:57:24.320860 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.321203 kubelet[2603]: W0302 12:57:24.320903 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.321203 kubelet[2603]: E0302 12:57:24.320920 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.321634 kubelet[2603]: E0302 12:57:24.321568 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.321634 kubelet[2603]: W0302 12:57:24.321616 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.321634 kubelet[2603]: E0302 12:57:24.321635 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.322815 kubelet[2603]: E0302 12:57:24.321999 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.322815 kubelet[2603]: W0302 12:57:24.322034 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.322815 kubelet[2603]: E0302 12:57:24.322047 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.323838 kubelet[2603]: E0302 12:57:24.323064 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.323838 kubelet[2603]: W0302 12:57:24.323078 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.323838 kubelet[2603]: E0302 12:57:24.323091 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.323838 kubelet[2603]: E0302 12:57:24.323513 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.323838 kubelet[2603]: W0302 12:57:24.323526 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.323838 kubelet[2603]: E0302 12:57:24.323539 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.325197 kubelet[2603]: E0302 12:57:24.324557 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.325197 kubelet[2603]: W0302 12:57:24.324577 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.325197 kubelet[2603]: E0302 12:57:24.324641 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.325407 kubelet[2603]: E0302 12:57:24.325350 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.325407 kubelet[2603]: W0302 12:57:24.325362 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.325407 kubelet[2603]: E0302 12:57:24.325376 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.328413 containerd[1470]: time="2026-03-02T12:57:24.325637273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.325689 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.328967 kubelet[2603]: W0302 12:57:24.325700 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.325713 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.326396 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.328967 kubelet[2603]: W0302 12:57:24.326410 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.326502 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.327053 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.328967 kubelet[2603]: W0302 12:57:24.327171 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.327191 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.328967 kubelet[2603]: E0302 12:57:24.328221 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.332528 kubelet[2603]: W0302 12:57:24.328234 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.332528 kubelet[2603]: E0302 12:57:24.328382 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.333513 kubelet[2603]: E0302 12:57:24.333433 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.333513 kubelet[2603]: W0302 12:57:24.333501 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.333594 kubelet[2603]: E0302 12:57:24.333534 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:24.335797 containerd[1470]: time="2026-03-02T12:57:24.335723991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4630152" Mar 2 12:57:24.341244 kubelet[2603]: E0302 12:57:24.338559 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:24.341244 kubelet[2603]: W0302 12:57:24.338602 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:24.341244 kubelet[2603]: E0302 12:57:24.338631 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:24.347367 containerd[1470]: time="2026-03-02T12:57:24.346483155Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:24.352238 containerd[1470]: time="2026-03-02T12:57:24.351238295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:24.353443 containerd[1470]: time="2026-03-02T12:57:24.352887583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 1.741480771s" Mar 2 12:57:24.353443 containerd[1470]: time="2026-03-02T12:57:24.352940281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\"" Mar 2 12:57:24.365035 containerd[1470]: time="2026-03-02T12:57:24.364908940Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 12:57:24.457840 containerd[1470]: time="2026-03-02T12:57:24.456065877Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8\"" Mar 2 12:57:24.460841 containerd[1470]: time="2026-03-02T12:57:24.460591432Z" level=info msg="StartContainer for \"fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8\"" Mar 2 12:57:24.554791 systemd[1]: Started cri-containerd-fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8.scope - libcontainer container fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8. 
Mar 2 12:57:24.684498 containerd[1470]: time="2026-03-02T12:57:24.678591857Z" level=info msg="StartContainer for \"fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8\" returns successfully" Mar 2 12:57:24.756744 systemd[1]: cri-containerd-fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8.scope: Deactivated successfully. Mar 2 12:57:24.857435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8-rootfs.mount: Deactivated successfully. Mar 2 12:57:25.056422 containerd[1470]: time="2026-03-02T12:57:25.055182931Z" level=info msg="shim disconnected" id=fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8 namespace=k8s.io Mar 2 12:57:25.056422 containerd[1470]: time="2026-03-02T12:57:25.055338208Z" level=warning msg="cleaning up after shim disconnected" id=fce666faed941bc79f740441cb73c91c9a8999fc94903fb9e8484e09b27b91f8 namespace=k8s.io Mar 2 12:57:25.056422 containerd[1470]: time="2026-03-02T12:57:25.055356723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:57:25.103238 containerd[1470]: time="2026-03-02T12:57:25.102923985Z" level=warning msg="cleanup warnings time=\"2026-03-02T12:57:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 2 12:57:25.207695 kubelet[2603]: E0302 12:57:25.206949 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:25.215746 containerd[1470]: time="2026-03-02T12:57:25.212585340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 12:57:25.619009 kubelet[2603]: E0302 12:57:25.618518 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:27.642578 kubelet[2603]: E0302 12:57:27.627954 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:29.640872 kubelet[2603]: E0302 12:57:29.639877 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:36.171563 kubelet[2603]: E0302 12:57:36.171125 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:36.188398 kubelet[2603]: E0302 12:57:36.186325 2603 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.296s" Mar 2 12:57:37.624997 kubelet[2603]: E0302 12:57:37.623900 
2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:39.638435 kubelet[2603]: E0302 12:57:39.619267 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:41.620946 kubelet[2603]: E0302 12:57:41.619640 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:43.637814 kubelet[2603]: E0302 12:57:43.619455 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:45.632729 kubelet[2603]: E0302 12:57:45.631273 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:47.647970 kubelet[2603]: E0302 12:57:47.625731 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:49.622901 kubelet[2603]: E0302 12:57:49.621539 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:51.641072 kubelet[2603]: E0302 12:57:51.640931 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:53.619058 kubelet[2603]: E0302 12:57:53.618149 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:55.618031 kubelet[2603]: E0302 12:57:55.617641 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:56.069204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369607109.mount: Deactivated successfully. Mar 2 12:57:56.254102 containerd[1470]: time="2026-03-02T12:57:56.252710743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:56.256240 containerd[1470]: time="2026-03-02T12:57:56.255564502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365" Mar 2 12:57:56.264165 containerd[1470]: time="2026-03-02T12:57:56.260524362Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:56.270517 containerd[1470]: time="2026-03-02T12:57:56.270395099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:57:56.271698 containerd[1470]: time="2026-03-02T12:57:56.271581845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 31.058941273s" Mar 2 12:57:56.271698 containerd[1470]: time="2026-03-02T12:57:56.271666763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\"" Mar 2 12:57:56.295143 containerd[1470]: time="2026-03-02T12:57:56.293081920Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 12:57:56.810250 containerd[1470]: time="2026-03-02T12:57:56.810187428Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe\"" Mar 2 12:57:56.817523 containerd[1470]: time="2026-03-02T12:57:56.814115139Z" level=info msg="StartContainer for \"120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe\"" Mar 2 12:57:57.214787 systemd[1]: Started cri-containerd-120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe.scope - libcontainer container 120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe. 
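
Note: as a quick sanity check on the pull that completes above, containerd reports the calico/node image as 159483227 bytes pulled in 31.058941273s, i.e. roughly 5 MB/s. The short computation below uses only those two logged values.

# Back-of-the-envelope throughput for the calico/node pull logged at 12:57:56.
size_bytes = 159_483_227          # repo digest size from the "Pulled image" entry
duration_s = 31.058941273         # "in 31.058941273s"

rate = size_bytes / duration_s
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")
# -> about 5.13 MB/s (4.90 MiB/s) for a ~160 MB image on this host.
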
Mar 2 12:57:57.989714 kubelet[2603]: E0302 12:57:57.986807 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:57:58.212620 containerd[1470]: time="2026-03-02T12:57:58.212492367Z" level=info msg="StartContainer for \"120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe\" returns successfully" Mar 2 12:57:58.404888 systemd[1]: cri-containerd-120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe.scope: Deactivated successfully. Mar 2 12:57:58.604825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe-rootfs.mount: Deactivated successfully. Mar 2 12:57:59.236510 containerd[1470]: time="2026-03-02T12:57:59.234901510Z" level=info msg="shim disconnected" id=120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe namespace=k8s.io Mar 2 12:57:59.236510 containerd[1470]: time="2026-03-02T12:57:59.235254536Z" level=warning msg="cleaning up after shim disconnected" id=120ac4a5a74edbef9db9d122e48779075a91c6799499087065f1f069cb6d6abe namespace=k8s.io Mar 2 12:57:59.236510 containerd[1470]: time="2026-03-02T12:57:59.235271347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:57:59.628373 kubelet[2603]: E0302 12:57:59.626579 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:59.628373 kubelet[2603]: E0302 12:57:59.626868 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:00.146610 containerd[1470]: time="2026-03-02T12:58:00.146556656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 12:58:01.617929 kubelet[2603]: E0302 12:58:01.617807 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:03.625837 kubelet[2603]: E0302 12:58:03.625749 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:05.619598 kubelet[2603]: E0302 12:58:05.619483 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:05.637768 kubelet[2603]: E0302 12:58:05.620937 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:07.681720 kubelet[2603]: E0302 12:58:07.676799 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:07.700161 kubelet[2603]: E0302 12:58:07.686346 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:09.620708 kubelet[2603]: E0302 12:58:09.619110 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:11.626926 kubelet[2603]: E0302 12:58:11.626678 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:13.617867 kubelet[2603]: E0302 12:58:13.617780 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:14.418936 containerd[1470]: time="2026-03-02T12:58:14.414509608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:14.431403 containerd[1470]: time="2026-03-02T12:58:14.431163804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418" Mar 2 12:58:14.435132 containerd[1470]: time="2026-03-02T12:58:14.435087222Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:14.762173 containerd[1470]: time="2026-03-02T12:58:14.762117977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 14.615164905s" Mar 2 12:58:14.767004 containerd[1470]: time="2026-03-02T12:58:14.766906284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\"" Mar 2 12:58:14.769231 containerd[1470]: time="2026-03-02T12:58:14.768892681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:14.864457 containerd[1470]: time="2026-03-02T12:58:14.864108395Z" 
level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 12:58:15.207027 containerd[1470]: time="2026-03-02T12:58:15.206275475Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207\"" Mar 2 12:58:15.225485 containerd[1470]: time="2026-03-02T12:58:15.219494557Z" level=info msg="StartContainer for \"73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207\"" Mar 2 12:58:15.491244 systemd[1]: Started cri-containerd-73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207.scope - libcontainer container 73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207. Mar 2 12:58:15.641393 kubelet[2603]: E0302 12:58:15.638861 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:15.757550 containerd[1470]: time="2026-03-02T12:58:15.757231909Z" level=info msg="StartContainer for \"73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207\" returns successfully" Mar 2 12:58:17.651385 kubelet[2603]: E0302 12:58:17.646134 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:17.662665 kubelet[2603]: E0302 12:58:17.662259 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:19.765081 kubelet[2603]: E0302 12:58:19.755997 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:19.884723 systemd[1]: cri-containerd-73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207.scope: Deactivated successfully. Mar 2 12:58:19.886064 systemd[1]: cri-containerd-73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207.scope: Consumed 2.329s CPU time. Mar 2 12:58:19.936166 kubelet[2603]: I0302 12:58:19.933582 2603 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 2 12:58:19.988902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207-rootfs.mount: Deactivated successfully. 
Mar 2 12:58:20.063142 containerd[1470]: time="2026-03-02T12:58:20.062787894Z" level=info msg="shim disconnected" id=73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207 namespace=k8s.io Mar 2 12:58:20.063142 containerd[1470]: time="2026-03-02T12:58:20.062899080Z" level=warning msg="cleaning up after shim disconnected" id=73a69ec60145b2b41ac67cc72947f484740e46f50466cee06ba7e619ab67a207 namespace=k8s.io Mar 2 12:58:20.063142 containerd[1470]: time="2026-03-02T12:58:20.062951398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:58:20.294732 systemd[1]: Created slice kubepods-besteffort-pod7cb3f66b_acf0_4b43_aaea_8ad73a2971a4.slice - libcontainer container kubepods-besteffort-pod7cb3f66b_acf0_4b43_aaea_8ad73a2971a4.slice. Mar 2 12:58:20.360421 kubelet[2603]: I0302 12:58:20.359621 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kcj9\" (UniqueName: \"kubernetes.io/projected/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-kube-api-access-2kcj9\") pod \"whisker-7bbc66965c-97zk4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " pod="calico-system/whisker-7bbc66965c-97zk4" Mar 2 12:58:20.360421 kubelet[2603]: I0302 12:58:20.359719 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-nginx-config\") pod \"whisker-7bbc66965c-97zk4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " pod="calico-system/whisker-7bbc66965c-97zk4" Mar 2 12:58:20.360421 kubelet[2603]: I0302 12:58:20.359747 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-ca-bundle\") pod \"whisker-7bbc66965c-97zk4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " pod="calico-system/whisker-7bbc66965c-97zk4" Mar 2 12:58:20.360421 kubelet[2603]: I0302 12:58:20.359771 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzprv\" (UniqueName: \"kubernetes.io/projected/83de2541-d5e5-4ff8-9f13-b519d8c21fab-kube-api-access-qzprv\") pod \"calico-apiserver-6778944f75-frr9d\" (UID: \"83de2541-d5e5-4ff8-9f13-b519d8c21fab\") " pod="calico-system/calico-apiserver-6778944f75-frr9d" Mar 2 12:58:20.360421 kubelet[2603]: I0302 12:58:20.359801 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83de2541-d5e5-4ff8-9f13-b519d8c21fab-calico-apiserver-certs\") pod \"calico-apiserver-6778944f75-frr9d\" (UID: \"83de2541-d5e5-4ff8-9f13-b519d8c21fab\") " pod="calico-system/calico-apiserver-6778944f75-frr9d" Mar 2 12:58:20.361528 kubelet[2603]: I0302 12:58:20.359828 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-backend-key-pair\") pod \"whisker-7bbc66965c-97zk4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " pod="calico-system/whisker-7bbc66965c-97zk4" Mar 2 12:58:20.377389 systemd[1]: Created slice kubepods-besteffort-pod83de2541_d5e5_4ff8_9f13_b519d8c21fab.slice - libcontainer container kubepods-besteffort-pod83de2541_d5e5_4ff8_9f13_b519d8c21fab.slice. 
Mar 2 12:58:20.405210 systemd[1]: Created slice kubepods-besteffort-pode94766e6_0a94_44da_adf7_e646b07431ce.slice - libcontainer container kubepods-besteffort-pode94766e6_0a94_44da_adf7_e646b07431ce.slice. Mar 2 12:58:20.425569 systemd[1]: Created slice kubepods-burstable-poda17f4e37_d122_4cb7_a268_77a8710e322d.slice - libcontainer container kubepods-burstable-poda17f4e37_d122_4cb7_a268_77a8710e322d.slice. Mar 2 12:58:20.448811 systemd[1]: Created slice kubepods-burstable-pod82df1e0b_93ca_4d37_95cc_8fb21c222566.slice - libcontainer container kubepods-burstable-pod82df1e0b_93ca_4d37_95cc_8fb21c222566.slice. Mar 2 12:58:20.462658 kubelet[2603]: I0302 12:58:20.461976 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17f4e37-d122-4cb7-a268-77a8710e322d-config-volume\") pod \"coredns-66bc5c9577-4pk99\" (UID: \"a17f4e37-d122-4cb7-a268-77a8710e322d\") " pod="kube-system/coredns-66bc5c9577-4pk99" Mar 2 12:58:20.462658 kubelet[2603]: I0302 12:58:20.462045 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94766e6-0a94-44da-adf7-e646b07431ce-config\") pod \"goldmane-54d7f6b6d6-x86ql\" (UID: \"e94766e6-0a94-44da-adf7-e646b07431ce\") " pod="calico-system/goldmane-54d7f6b6d6-x86ql" Mar 2 12:58:20.462658 kubelet[2603]: I0302 12:58:20.462134 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e246d66-0a6c-4a48-969d-096fb291a66e-calico-apiserver-certs\") pod \"calico-apiserver-6778944f75-mnf89\" (UID: \"9e246d66-0a6c-4a48-969d-096fb291a66e\") " pod="calico-system/calico-apiserver-6778944f75-mnf89" Mar 2 12:58:20.462658 kubelet[2603]: I0302 12:58:20.462158 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqxz\" (UniqueName: \"kubernetes.io/projected/9e246d66-0a6c-4a48-969d-096fb291a66e-kube-api-access-rtqxz\") pod \"calico-apiserver-6778944f75-mnf89\" (UID: \"9e246d66-0a6c-4a48-969d-096fb291a66e\") " pod="calico-system/calico-apiserver-6778944f75-mnf89" Mar 2 12:58:20.462658 kubelet[2603]: I0302 12:58:20.462177 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb7eb2f6-04fe-4439-a025-db21095b18ae-tigera-ca-bundle\") pod \"calico-kube-controllers-867c96dbbc-vh6px\" (UID: \"eb7eb2f6-04fe-4439-a025-db21095b18ae\") " pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" Mar 2 12:58:20.463703 kubelet[2603]: I0302 12:58:20.462203 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwwg\" (UniqueName: \"kubernetes.io/projected/82df1e0b-93ca-4d37-95cc-8fb21c222566-kube-api-access-7cwwg\") pod \"coredns-66bc5c9577-lt57c\" (UID: \"82df1e0b-93ca-4d37-95cc-8fb21c222566\") " pod="kube-system/coredns-66bc5c9577-lt57c" Mar 2 12:58:20.463703 kubelet[2603]: I0302 12:58:20.462348 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82df1e0b-93ca-4d37-95cc-8fb21c222566-config-volume\") pod \"coredns-66bc5c9577-lt57c\" (UID: \"82df1e0b-93ca-4d37-95cc-8fb21c222566\") " pod="kube-system/coredns-66bc5c9577-lt57c" Mar 2 12:58:20.463703 kubelet[2603]: I0302 
12:58:20.462375 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94766e6-0a94-44da-adf7-e646b07431ce-goldmane-ca-bundle\") pod \"goldmane-54d7f6b6d6-x86ql\" (UID: \"e94766e6-0a94-44da-adf7-e646b07431ce\") " pod="calico-system/goldmane-54d7f6b6d6-x86ql" Mar 2 12:58:20.463703 kubelet[2603]: I0302 12:58:20.462397 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e94766e6-0a94-44da-adf7-e646b07431ce-goldmane-key-pair\") pod \"goldmane-54d7f6b6d6-x86ql\" (UID: \"e94766e6-0a94-44da-adf7-e646b07431ce\") " pod="calico-system/goldmane-54d7f6b6d6-x86ql" Mar 2 12:58:20.463703 kubelet[2603]: I0302 12:58:20.462418 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6xw\" (UniqueName: \"kubernetes.io/projected/e94766e6-0a94-44da-adf7-e646b07431ce-kube-api-access-4s6xw\") pod \"goldmane-54d7f6b6d6-x86ql\" (UID: \"e94766e6-0a94-44da-adf7-e646b07431ce\") " pod="calico-system/goldmane-54d7f6b6d6-x86ql" Mar 2 12:58:20.464037 kubelet[2603]: I0302 12:58:20.462438 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr6wr\" (UniqueName: \"kubernetes.io/projected/eb7eb2f6-04fe-4439-a025-db21095b18ae-kube-api-access-nr6wr\") pod \"calico-kube-controllers-867c96dbbc-vh6px\" (UID: \"eb7eb2f6-04fe-4439-a025-db21095b18ae\") " pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" Mar 2 12:58:20.464037 kubelet[2603]: I0302 12:58:20.462458 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dlp\" (UniqueName: \"kubernetes.io/projected/a17f4e37-d122-4cb7-a268-77a8710e322d-kube-api-access-n4dlp\") pod \"coredns-66bc5c9577-4pk99\" (UID: \"a17f4e37-d122-4cb7-a268-77a8710e322d\") " pod="kube-system/coredns-66bc5c9577-4pk99" Mar 2 12:58:20.483033 systemd[1]: Created slice kubepods-besteffort-podeb7eb2f6_04fe_4439_a025_db21095b18ae.slice - libcontainer container kubepods-besteffort-podeb7eb2f6_04fe_4439_a025_db21095b18ae.slice. Mar 2 12:58:20.669494 systemd[1]: Created slice kubepods-besteffort-pod9e246d66_0a6c_4a48_969d_096fb291a66e.slice - libcontainer container kubepods-besteffort-pod9e246d66_0a6c_4a48_969d_096fb291a66e.slice. 
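
Note: the "Created slice" entries above follow the kubelet's systemd cgroup naming: each pod gets a slice under its QoS class, with the dashes in the pod UID replaced by underscores. A small, illustrative reproduction of that naming, using a UID taken from the log:

def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # kubepods-<qos>-pod<uid with "-" -> "_">.slice, as seen in the systemd entries above
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("9e246d66-0a6c-4a48-969d-096fb291a66e"))
# -> kubepods-besteffort-pod9e246d66_0a6c_4a48_969d_096fb291a66e.slice
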
Mar 2 12:58:20.746547 kubelet[2603]: E0302 12:58:20.746417 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:20.749661 containerd[1470]: time="2026-03-02T12:58:20.748888727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4pk99,Uid:a17f4e37-d122-4cb7-a268-77a8710e322d,Namespace:kube-system,Attempt:0,}" Mar 2 12:58:20.765268 containerd[1470]: time="2026-03-02T12:58:20.763533523Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 12:58:20.786019 kubelet[2603]: E0302 12:58:20.785349 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:20.786799 containerd[1470]: time="2026-03-02T12:58:20.786394992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lt57c,Uid:82df1e0b-93ca-4d37-95cc-8fb21c222566,Namespace:kube-system,Attempt:0,}" Mar 2 12:58:20.878769 containerd[1470]: time="2026-03-02T12:58:20.878584136Z" level=info msg="CreateContainer within sandbox \"471b9da99283e2d72ddea26eabab8c3b03fc10b5b2b39f1d92251c4098d88508\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"09f2dc000149d4c8298894d4390f734fabd5baa461ebe54b97bc2a0b46628710\"" Mar 2 12:58:20.881376 containerd[1470]: time="2026-03-02T12:58:20.880447503Z" level=info msg="StartContainer for \"09f2dc000149d4c8298894d4390f734fabd5baa461ebe54b97bc2a0b46628710\"" Mar 2 12:58:20.946079 containerd[1470]: time="2026-03-02T12:58:20.939022689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbc66965c-97zk4,Uid:7cb3f66b-acf0-4b43-aaea-8ad73a2971a4,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:21.054372 containerd[1470]: time="2026-03-02T12:58:21.054257363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-mnf89,Uid:9e246d66-0a6c-4a48-969d-096fb291a66e,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:21.059348 containerd[1470]: time="2026-03-02T12:58:21.056508182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-867c96dbbc-vh6px,Uid:eb7eb2f6-04fe-4439-a025-db21095b18ae,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:21.080135 containerd[1470]: time="2026-03-02T12:58:21.079674807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-frr9d,Uid:83de2541-d5e5-4ff8-9f13-b519d8c21fab,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:21.109978 containerd[1470]: time="2026-03-02T12:58:21.109373330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-x86ql,Uid:e94766e6-0a94-44da-adf7-e646b07431ce,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:21.151469 systemd[1]: Started cri-containerd-09f2dc000149d4c8298894d4390f734fabd5baa461ebe54b97bc2a0b46628710.scope - libcontainer container 09f2dc000149d4c8298894d4390f734fabd5baa461ebe54b97bc2a0b46628710. 
Mar 2 12:58:21.612849 containerd[1470]: time="2026-03-02T12:58:21.612108063Z" level=info msg="StartContainer for \"09f2dc000149d4c8298894d4390f734fabd5baa461ebe54b97bc2a0b46628710\" returns successfully" Mar 2 12:58:21.639479 systemd[1]: Created slice kubepods-besteffort-pod6027383d_96c9_465b_88ba_00723209fa19.slice - libcontainer container kubepods-besteffort-pod6027383d_96c9_465b_88ba_00723209fa19.slice. Mar 2 12:58:21.652463 containerd[1470]: time="2026-03-02T12:58:21.652408300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktfwf,Uid:6027383d-96c9-465b-88ba-00723209fa19,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:21.860463 containerd[1470]: time="2026-03-02T12:58:21.860271723Z" level=error msg="Failed to destroy network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:21.861843 containerd[1470]: time="2026-03-02T12:58:21.861806799Z" level=error msg="encountered an error cleaning up failed sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:21.862332 containerd[1470]: time="2026-03-02T12:58:21.862177740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lt57c,Uid:82df1e0b-93ca-4d37-95cc-8fb21c222566,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:21.862442 kubelet[2603]: I0302 12:58:21.862154 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-grbfj" podStartSLOduration=8.967908188 podStartE2EDuration="1m6.862030161s" podCreationTimestamp="2026-03-02 12:57:15 +0000 UTC" firstStartedPulling="2026-03-02 12:57:16.887656169 +0000 UTC m=+40.915623117" lastFinishedPulling="2026-03-02 12:58:14.781778142 +0000 UTC m=+98.809745090" observedRunningTime="2026-03-02 12:58:21.851913306 +0000 UTC m=+105.879880275" watchObservedRunningTime="2026-03-02 12:58:21.862030161 +0000 UTC m=+105.889997119" Mar 2 12:58:21.884804 kubelet[2603]: E0302 12:58:21.877414 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:21.892583 kubelet[2603]: E0302 12:58:21.892238 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-lt57c" Mar 2 12:58:21.892583 kubelet[2603]: E0302 12:58:21.892535 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lt57c" Mar 2 12:58:21.894606 kubelet[2603]: E0302 12:58:21.894363 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lt57c_kube-system(82df1e0b-93ca-4d37-95cc-8fb21c222566)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lt57c_kube-system(82df1e0b-93ca-4d37-95cc-8fb21c222566)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lt57c" podUID="82df1e0b-93ca-4d37-95cc-8fb21c222566" Mar 2 12:58:22.007509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743-shm.mount: Deactivated successfully. Mar 2 12:58:22.073257 containerd[1470]: time="2026-03-02T12:58:22.071415583Z" level=error msg="Failed to destroy network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.079127 containerd[1470]: time="2026-03-02T12:58:22.078419090Z" level=error msg="encountered an error cleaning up failed sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.082242 containerd[1470]: time="2026-03-02T12:58:22.080510573Z" level=error msg="Failed to destroy network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.082242 containerd[1470]: time="2026-03-02T12:58:22.081377677Z" level=error msg="encountered an error cleaning up failed sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.082242 containerd[1470]: time="2026-03-02T12:58:22.081452325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-mnf89,Uid:9e246d66-0a6c-4a48-969d-096fb291a66e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.083123 kubelet[2603]: E0302 12:58:22.081833 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.083123 kubelet[2603]: E0302 12:58:22.081960 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6778944f75-mnf89" Mar 2 12:58:22.083123 kubelet[2603]: E0302 12:58:22.081990 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6778944f75-mnf89" Mar 2 12:58:22.083371 kubelet[2603]: E0302 12:58:22.082065 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6778944f75-mnf89_calico-system(9e246d66-0a6c-4a48-969d-096fb291a66e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6778944f75-mnf89_calico-system(9e246d66-0a6c-4a48-969d-096fb291a66e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6778944f75-mnf89" podUID="9e246d66-0a6c-4a48-969d-096fb291a66e" Mar 2 12:58:22.085404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0-shm.mount: Deactivated successfully. 
Mar 2 12:58:22.087005 containerd[1470]: time="2026-03-02T12:58:22.086611684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4pk99,Uid:a17f4e37-d122-4cb7-a268-77a8710e322d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.088268 kubelet[2603]: E0302 12:58:22.087990 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.088268 kubelet[2603]: E0302 12:58:22.088101 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4pk99" Mar 2 12:58:22.088268 kubelet[2603]: E0302 12:58:22.088129 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4pk99" Mar 2 12:58:22.088558 kubelet[2603]: E0302 12:58:22.088491 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4pk99_kube-system(a17f4e37-d122-4cb7-a268-77a8710e322d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4pk99_kube-system(a17f4e37-d122-4cb7-a268-77a8710e322d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4pk99" podUID="a17f4e37-d122-4cb7-a268-77a8710e322d" Mar 2 12:58:22.102184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca-shm.mount: Deactivated successfully. 
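
Note: every RunPodSandbox failure in this stretch fails the same way: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container only writes once it is running with /var/lib/calico/ mounted. The snippet below is a simplified stand-in for that precondition check, not the plugin's actual code; the path is taken from the log.

from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")

def calico_node_ready() -> bool:
    """True once calico/node has written its node name to the host path."""
    return NODENAME_FILE.is_file() and NODENAME_FILE.read_text().strip() != ""

if not calico_node_ready():
    print("stat /var/lib/calico/nodename: no such file or directory: "
          "check that the calico/node container is running and has mounted /var/lib/calico/")
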
Mar 2 12:58:22.212048 containerd[1470]: time="2026-03-02T12:58:22.209256557Z" level=error msg="Failed to destroy network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.220096 containerd[1470]: time="2026-03-02T12:58:22.219792652Z" level=error msg="encountered an error cleaning up failed sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.223635 containerd[1470]: time="2026-03-02T12:58:22.223580280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-frr9d,Uid:83de2541-d5e5-4ff8-9f13-b519d8c21fab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.226676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1-shm.mount: Deactivated successfully. Mar 2 12:58:22.253598 kubelet[2603]: E0302 12:58:22.228191 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.253598 kubelet[2603]: E0302 12:58:22.234097 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6778944f75-frr9d" Mar 2 12:58:22.253598 kubelet[2603]: E0302 12:58:22.234127 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6778944f75-frr9d" Mar 2 12:58:22.256438 kubelet[2603]: E0302 12:58:22.246785 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6778944f75-frr9d_calico-system(83de2541-d5e5-4ff8-9f13-b519d8c21fab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6778944f75-frr9d_calico-system(83de2541-d5e5-4ff8-9f13-b519d8c21fab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6778944f75-frr9d" podUID="83de2541-d5e5-4ff8-9f13-b519d8c21fab" Mar 2 12:58:22.259992 containerd[1470]: time="2026-03-02T12:58:22.258680993Z" level=error msg="Failed to destroy network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.262239 containerd[1470]: time="2026-03-02T12:58:22.262125813Z" level=error msg="encountered an error cleaning up failed sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.262485 containerd[1470]: time="2026-03-02T12:58:22.262391708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-x86ql,Uid:e94766e6-0a94-44da-adf7-e646b07431ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.264122 kubelet[2603]: E0302 12:58:22.263065 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.265638 kubelet[2603]: E0302 12:58:22.265545 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d7f6b6d6-x86ql" Mar 2 12:58:22.265638 kubelet[2603]: E0302 12:58:22.265596 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d7f6b6d6-x86ql" Mar 2 12:58:22.266088 kubelet[2603]: E0302 12:58:22.266013 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d7f6b6d6-x86ql_calico-system(e94766e6-0a94-44da-adf7-e646b07431ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d7f6b6d6-x86ql_calico-system(e94766e6-0a94-44da-adf7-e646b07431ce)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d7f6b6d6-x86ql" podUID="e94766e6-0a94-44da-adf7-e646b07431ce" Mar 2 12:58:22.282209 containerd[1470]: time="2026-03-02T12:58:22.281426209Z" level=error msg="Failed to destroy network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.288008 containerd[1470]: time="2026-03-02T12:58:22.287443986Z" level=error msg="encountered an error cleaning up failed sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.288008 containerd[1470]: time="2026-03-02T12:58:22.287764513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbc66965c-97zk4,Uid:7cb3f66b-acf0-4b43-aaea-8ad73a2971a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.288836 kubelet[2603]: E0302 12:58:22.288759 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.288836 kubelet[2603]: E0302 12:58:22.288902 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bbc66965c-97zk4" Mar 2 12:58:22.289076 kubelet[2603]: E0302 12:58:22.288938 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bbc66965c-97zk4" Mar 2 12:58:22.289076 kubelet[2603]: E0302 12:58:22.289023 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bbc66965c-97zk4_calico-system(7cb3f66b-acf0-4b43-aaea-8ad73a2971a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-7bbc66965c-97zk4_calico-system(7cb3f66b-acf0-4b43-aaea-8ad73a2971a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bbc66965c-97zk4" podUID="7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" Mar 2 12:58:22.309537 containerd[1470]: time="2026-03-02T12:58:22.309452820Z" level=error msg="Failed to destroy network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.311367 containerd[1470]: time="2026-03-02T12:58:22.311187197Z" level=error msg="encountered an error cleaning up failed sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.311685 containerd[1470]: time="2026-03-02T12:58:22.311408178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-867c96dbbc-vh6px,Uid:eb7eb2f6-04fe-4439-a025-db21095b18ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.312253 kubelet[2603]: E0302 12:58:22.312140 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.312611 kubelet[2603]: E0302 12:58:22.312251 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" Mar 2 12:58:22.312611 kubelet[2603]: E0302 12:58:22.312359 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" Mar 2 12:58:22.312611 kubelet[2603]: E0302 12:58:22.312438 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-867c96dbbc-vh6px_calico-system(eb7eb2f6-04fe-4439-a025-db21095b18ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-867c96dbbc-vh6px_calico-system(eb7eb2f6-04fe-4439-a025-db21095b18ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" podUID="eb7eb2f6-04fe-4439-a025-db21095b18ae" Mar 2 12:58:22.363895 containerd[1470]: time="2026-03-02T12:58:22.363708938Z" level=error msg="Failed to destroy network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.369171 containerd[1470]: time="2026-03-02T12:58:22.368714992Z" level=error msg="encountered an error cleaning up failed sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.369661 containerd[1470]: time="2026-03-02T12:58:22.369402962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktfwf,Uid:6027383d-96c9-465b-88ba-00723209fa19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.370430 kubelet[2603]: E0302 12:58:22.370068 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:22.370430 kubelet[2603]: E0302 12:58:22.370148 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:58:22.370430 kubelet[2603]: E0302 12:58:22.370176 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktfwf" Mar 2 12:58:22.370599 kubelet[2603]: E0302 12:58:22.370257 2603 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ktfwf_calico-system(6027383d-96c9-465b-88ba-00723209fa19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ktfwf_calico-system(6027383d-96c9-465b-88ba-00723209fa19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:22.812340 kubelet[2603]: I0302 12:58:22.810480 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:22.826407 containerd[1470]: time="2026-03-02T12:58:22.826347129Z" level=info msg="StopPodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\"" Mar 2 12:58:22.830214 kubelet[2603]: I0302 12:58:22.829953 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:58:22.832956 containerd[1470]: time="2026-03-02T12:58:22.831637146Z" level=info msg="StopPodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\"" Mar 2 12:58:22.832956 containerd[1470]: time="2026-03-02T12:58:22.831916245Z" level=info msg="Ensure that sandbox 4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9 in task-service has been cleanup successfully" Mar 2 12:58:22.832956 containerd[1470]: time="2026-03-02T12:58:22.831645367Z" level=info msg="Ensure that sandbox bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954 in task-service has been cleanup successfully" Mar 2 12:58:22.839457 kubelet[2603]: I0302 12:58:22.838757 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:58:22.860535 containerd[1470]: time="2026-03-02T12:58:22.860420326Z" level=info msg="StopPodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\"" Mar 2 12:58:22.861215 containerd[1470]: time="2026-03-02T12:58:22.861129135Z" level=info msg="Ensure that sandbox 106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743 in task-service has been cleanup successfully" Mar 2 12:58:22.869397 kubelet[2603]: I0302 12:58:22.865479 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:58:22.889633 containerd[1470]: time="2026-03-02T12:58:22.876508968Z" level=info msg="StopPodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\"" Mar 2 12:58:22.889633 containerd[1470]: time="2026-03-02T12:58:22.889563757Z" level=info msg="Ensure that sandbox f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d in task-service has been cleanup successfully" Mar 2 12:58:22.898392 kubelet[2603]: I0302 12:58:22.898058 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:58:22.899762 containerd[1470]: time="2026-03-02T12:58:22.899692451Z" level=info msg="StopPodSandbox for 
\"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\"" Mar 2 12:58:22.905506 containerd[1470]: time="2026-03-02T12:58:22.905455736Z" level=info msg="Ensure that sandbox a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1 in task-service has been cleanup successfully" Mar 2 12:58:22.907416 kubelet[2603]: I0302 12:58:22.906438 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:58:22.918400 containerd[1470]: time="2026-03-02T12:58:22.914731057Z" level=info msg="StopPodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\"" Mar 2 12:58:22.918400 containerd[1470]: time="2026-03-02T12:58:22.915134318Z" level=info msg="Ensure that sandbox 641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca in task-service has been cleanup successfully" Mar 2 12:58:22.951177 kubelet[2603]: I0302 12:58:22.945104 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:58:22.953371 containerd[1470]: time="2026-03-02T12:58:22.953269035Z" level=info msg="StopPodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\"" Mar 2 12:58:22.953779 containerd[1470]: time="2026-03-02T12:58:22.953753004Z" level=info msg="Ensure that sandbox 14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0 in task-service has been cleanup successfully" Mar 2 12:58:22.957119 kubelet[2603]: I0302 12:58:22.956620 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:58:22.957804 containerd[1470]: time="2026-03-02T12:58:22.957716282Z" level=info msg="StopPodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\"" Mar 2 12:58:22.958099 containerd[1470]: time="2026-03-02T12:58:22.958021499Z" level=info msg="Ensure that sandbox b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187 in task-service has been cleanup successfully" Mar 2 12:58:23.000788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d-shm.mount: Deactivated successfully. Mar 2 12:58:23.002627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9-shm.mount: Deactivated successfully. Mar 2 12:58:23.002738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187-shm.mount: Deactivated successfully. Mar 2 12:58:23.002907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954-shm.mount: Deactivated successfully. 
Mar 2 12:58:23.267454 containerd[1470]: time="2026-03-02T12:58:23.267271438Z" level=error msg="StopPodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" failed" error="failed to destroy network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.268238 kubelet[2603]: E0302 12:58:23.267960 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:23.269141 kubelet[2603]: E0302 12:58:23.269018 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954"} Mar 2 12:58:23.269225 kubelet[2603]: E0302 12:58:23.269158 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.269445 kubelet[2603]: E0302 12:58:23.269240 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bbc66965c-97zk4" podUID="7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" Mar 2 12:58:23.298362 containerd[1470]: time="2026-03-02T12:58:23.292550951Z" level=error msg="StopPodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" failed" error="failed to destroy network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.298555 kubelet[2603]: E0302 12:58:23.295567 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:58:23.298555 kubelet[2603]: E0302 12:58:23.295642 2603 kuberuntime_manager.go:1665] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0"} Mar 2 12:58:23.298555 kubelet[2603]: E0302 12:58:23.295683 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a17f4e37-d122-4cb7-a268-77a8710e322d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.298555 kubelet[2603]: E0302 12:58:23.295720 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a17f4e37-d122-4cb7-a268-77a8710e322d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4pk99" podUID="a17f4e37-d122-4cb7-a268-77a8710e322d" Mar 2 12:58:23.300678 containerd[1470]: time="2026-03-02T12:58:23.300609162Z" level=error msg="StopPodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" failed" error="failed to destroy network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.301806 kubelet[2603]: E0302 12:58:23.301603 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:58:23.302212 kubelet[2603]: E0302 12:58:23.302177 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743"} Mar 2 12:58:23.303120 kubelet[2603]: E0302 12:58:23.302637 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82df1e0b-93ca-4d37-95cc-8fb21c222566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.303120 kubelet[2603]: E0302 12:58:23.303062 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82df1e0b-93ca-4d37-95cc-8fb21c222566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lt57c" podUID="82df1e0b-93ca-4d37-95cc-8fb21c222566" Mar 2 12:58:23.346591 containerd[1470]: time="2026-03-02T12:58:23.346221358Z" level=error msg="StopPodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" failed" error="failed to destroy network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.348544 kubelet[2603]: E0302 12:58:23.347723 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:58:23.348544 kubelet[2603]: E0302 12:58:23.347796 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d"} Mar 2 12:58:23.348544 kubelet[2603]: E0302 12:58:23.347895 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6027383d-96c9-465b-88ba-00723209fa19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.348544 kubelet[2603]: E0302 12:58:23.347953 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6027383d-96c9-465b-88ba-00723209fa19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ktfwf" podUID="6027383d-96c9-465b-88ba-00723209fa19" Mar 2 12:58:23.367088 containerd[1470]: time="2026-03-02T12:58:23.367015923Z" level=error msg="StopPodSandbox for \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" failed" error="failed to destroy network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.368175 kubelet[2603]: E0302 12:58:23.368008 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:58:23.368175 kubelet[2603]: E0302 12:58:23.368085 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1"} Mar 2 12:58:23.369735 kubelet[2603]: E0302 12:58:23.368136 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83de2541-d5e5-4ff8-9f13-b519d8c21fab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.369735 kubelet[2603]: E0302 12:58:23.369581 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83de2541-d5e5-4ff8-9f13-b519d8c21fab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6778944f75-frr9d" podUID="83de2541-d5e5-4ff8-9f13-b519d8c21fab" Mar 2 12:58:23.419383 containerd[1470]: time="2026-03-02T12:58:23.418555566Z" level=error msg="StopPodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" failed" error="failed to destroy network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.419641 kubelet[2603]: E0302 12:58:23.419190 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:58:23.419641 kubelet[2603]: E0302 12:58:23.419255 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9"} Mar 2 12:58:23.419641 kubelet[2603]: E0302 12:58:23.419383 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e94766e6-0a94-44da-adf7-e646b07431ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.419641 kubelet[2603]: E0302 12:58:23.419436 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e94766e6-0a94-44da-adf7-e646b07431ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d7f6b6d6-x86ql" podUID="e94766e6-0a94-44da-adf7-e646b07431ce" Mar 2 12:58:23.450711 containerd[1470]: time="2026-03-02T12:58:23.444205863Z" level=error msg="StopPodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" failed" error="failed to destroy network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.453227 kubelet[2603]: E0302 12:58:23.452929 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:58:23.453227 kubelet[2603]: E0302 12:58:23.453007 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca"} Mar 2 12:58:23.453997 kubelet[2603]: E0302 12:58:23.453957 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e246d66-0a6c-4a48-969d-096fb291a66e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.454263 kubelet[2603]: E0302 12:58:23.454025 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e246d66-0a6c-4a48-969d-096fb291a66e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6778944f75-mnf89" podUID="9e246d66-0a6c-4a48-969d-096fb291a66e" Mar 2 12:58:23.485544 containerd[1470]: time="2026-03-02T12:58:23.485465643Z" level=error msg="StopPodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" failed" error="failed to destroy network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:23.486377 kubelet[2603]: E0302 12:58:23.486027 2603 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:58:23.486377 kubelet[2603]: E0302 12:58:23.486130 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187"} Mar 2 12:58:23.486377 kubelet[2603]: E0302 12:58:23.486182 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb7eb2f6-04fe-4439-a025-db21095b18ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:23.486377 kubelet[2603]: E0302 12:58:23.486228 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb7eb2f6-04fe-4439-a025-db21095b18ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" podUID="eb7eb2f6-04fe-4439-a025-db21095b18ae" Mar 2 12:58:24.037638 containerd[1470]: time="2026-03-02T12:58:24.036727512Z" level=info msg="StopPodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\"" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.619 [INFO][4052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.625 [INFO][4052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" iface="eth0" netns="/var/run/netns/cni-7b4545f6-93c6-8e43-1db2-6d2db2b3c631" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.628 [INFO][4052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" iface="eth0" netns="/var/run/netns/cni-7b4545f6-93c6-8e43-1db2-6d2db2b3c631" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.653 [INFO][4052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" iface="eth0" netns="/var/run/netns/cni-7b4545f6-93c6-8e43-1db2-6d2db2b3c631" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.653 [INFO][4052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.653 [INFO][4052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.773 [INFO][4061] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.774 [INFO][4061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.774 [INFO][4061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.799 [WARNING][4061] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.799 [INFO][4061] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.825 [INFO][4061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:24.858391 containerd[1470]: 2026-03-02 12:58:24.849 [INFO][4052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:24.862815 systemd[1]: run-netns-cni\x2d7b4545f6\x2d93c6\x2d8e43\x2d1db2\x2d6d2db2b3c631.mount: Deactivated successfully. 
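The mount units systemd reports as deactivated (run-containerd-…-shm.mount earlier, run-netns-cni\x2d….mount here) are simply escaped mount-point paths. A simplified sketch of that escaping rule (hypothetical Python approximating `systemd-escape --path`; it ignores corner cases such as a leading dot):

    # Hypothetical, simplified reimplementation of systemd's path escaping, shown
    # only to explain the unit names above; not systemd source code.
    def systemd_path_escape(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)                       # kept verbatim
            else:
                out.append("\\x%02x" % ord(ch))      # everything else, incl. '-', becomes \xXX
        return "".join(out)

    # Reproduces the netns mount unit deactivated above.
    print(systemd_path_escape("/run/netns/cni-7b4545f6-93c6-8e43-1db2-6d2db2b3c631") + ".mount")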
Mar 2 12:58:24.869081 containerd[1470]: time="2026-03-02T12:58:24.868948559Z" level=info msg="TearDown network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" successfully" Mar 2 12:58:24.869081 containerd[1470]: time="2026-03-02T12:58:24.869056882Z" level=info msg="StopPodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" returns successfully" Mar 2 12:58:25.066997 kubelet[2603]: I0302 12:58:25.063936 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-nginx-config\") pod \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " Mar 2 12:58:25.066997 kubelet[2603]: I0302 12:58:25.064120 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-ca-bundle\") pod \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " Mar 2 12:58:25.066997 kubelet[2603]: I0302 12:58:25.064174 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-backend-key-pair\") pod \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " Mar 2 12:58:25.066997 kubelet[2603]: I0302 12:58:25.064203 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kcj9\" (UniqueName: \"kubernetes.io/projected/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-kube-api-access-2kcj9\") pod \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\" (UID: \"7cb3f66b-acf0-4b43-aaea-8ad73a2971a4\") " Mar 2 12:58:25.073585 kubelet[2603]: I0302 12:58:25.072246 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" (UID: "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:58:25.073585 kubelet[2603]: I0302 12:58:25.072563 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" (UID: "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:58:25.099379 kubelet[2603]: I0302 12:58:25.096624 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-kube-api-access-2kcj9" (OuterVolumeSpecName: "kube-api-access-2kcj9") pod "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" (UID: "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4"). InnerVolumeSpecName "kube-api-access-2kcj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 12:58:25.098440 systemd[1]: var-lib-kubelet-pods-7cb3f66b\x2dacf0\x2d4b43\x2daaea\x2d8ad73a2971a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2kcj9.mount: Deactivated successfully. 
Mar 2 12:58:25.115786 systemd[1]: var-lib-kubelet-pods-7cb3f66b\x2dacf0\x2d4b43\x2daaea\x2d8ad73a2971a4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 2 12:58:25.125745 kubelet[2603]: I0302 12:58:25.116257 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" (UID: "7cb3f66b-acf0-4b43-aaea-8ad73a2971a4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 12:58:25.165550 kubelet[2603]: I0302 12:58:25.165173 2603 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 2 12:58:25.165550 kubelet[2603]: I0302 12:58:25.165254 2603 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 2 12:58:25.165550 kubelet[2603]: I0302 12:58:25.165271 2603 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 2 12:58:25.165550 kubelet[2603]: I0302 12:58:25.165364 2603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2kcj9\" (UniqueName: \"kubernetes.io/projected/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4-kube-api-access-2kcj9\") on node \"localhost\" DevicePath \"\"" Mar 2 12:58:25.378201 systemd[1]: Removed slice kubepods-besteffort-pod7cb3f66b_acf0_4b43_aaea_8ad73a2971a4.slice - libcontainer container kubepods-besteffort-pod7cb3f66b_acf0_4b43_aaea_8ad73a2971a4.slice. Mar 2 12:58:25.944038 systemd[1]: Created slice kubepods-besteffort-pod66da8b7d_8210_4b67_866c_1c172b5798f8.slice - libcontainer container kubepods-besteffort-pod66da8b7d_8210_4b67_866c_1c172b5798f8.slice. 
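The two slice names above follow the kubelet's systemd cgroup-driver convention for BestEffort pods: the QoS class plus the pod UID with dashes mapped to underscores. A small sketch (hypothetical helper reproducing the naming convention only, not kubelet code):

    # Hypothetical helper: derive the BestEffort pod slice names seen in the
    # "Removed slice" / "Created slice" entries above from the pod UIDs.
    def besteffort_pod_slice(pod_uid: str) -> str:
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    print(besteffort_pod_slice("7cb3f66b-acf0-4b43-aaea-8ad73a2971a4"))  # removed whisker pod
    print(besteffort_pod_slice("66da8b7d-8210-4b67-866c-1c172b5798f8"))  # replacement whisker pod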
Mar 2 12:58:25.963197 kubelet[2603]: I0302 12:58:25.962760 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwsrt\" (UniqueName: \"kubernetes.io/projected/66da8b7d-8210-4b67-866c-1c172b5798f8-kube-api-access-rwsrt\") pod \"whisker-67ffbdf6dc-b5mzq\" (UID: \"66da8b7d-8210-4b67-866c-1c172b5798f8\") " pod="calico-system/whisker-67ffbdf6dc-b5mzq" Mar 2 12:58:25.963197 kubelet[2603]: I0302 12:58:25.963027 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/66da8b7d-8210-4b67-866c-1c172b5798f8-nginx-config\") pod \"whisker-67ffbdf6dc-b5mzq\" (UID: \"66da8b7d-8210-4b67-866c-1c172b5798f8\") " pod="calico-system/whisker-67ffbdf6dc-b5mzq" Mar 2 12:58:25.963197 kubelet[2603]: I0302 12:58:25.963061 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66da8b7d-8210-4b67-866c-1c172b5798f8-whisker-backend-key-pair\") pod \"whisker-67ffbdf6dc-b5mzq\" (UID: \"66da8b7d-8210-4b67-866c-1c172b5798f8\") " pod="calico-system/whisker-67ffbdf6dc-b5mzq" Mar 2 12:58:25.963197 kubelet[2603]: I0302 12:58:25.963085 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66da8b7d-8210-4b67-866c-1c172b5798f8-whisker-ca-bundle\") pod \"whisker-67ffbdf6dc-b5mzq\" (UID: \"66da8b7d-8210-4b67-866c-1c172b5798f8\") " pod="calico-system/whisker-67ffbdf6dc-b5mzq" Mar 2 12:58:26.289600 containerd[1470]: time="2026-03-02T12:58:26.289536703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67ffbdf6dc-b5mzq,Uid:66da8b7d-8210-4b67-866c-1c172b5798f8,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:26.663918 kubelet[2603]: I0302 12:58:26.663073 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb3f66b-acf0-4b43-aaea-8ad73a2971a4" path="/var/lib/kubelet/pods/7cb3f66b-acf0-4b43-aaea-8ad73a2971a4/volumes" Mar 2 12:58:27.810513 systemd-networkd[1386]: calia5505b5e8bb: Link UP Mar 2 12:58:27.819008 systemd-networkd[1386]: calia5505b5e8bb: Gained carrier Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:26.654 [ERROR][4179] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:26.868 [INFO][4179] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0 whisker-67ffbdf6dc- calico-system 66da8b7d-8210-4b67-866c-1c172b5798f8 1115 0 2026-03-02 12:58:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67ffbdf6dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-67ffbdf6dc-b5mzq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5505b5e8bb [] [] }} ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:26.873 [INFO][4179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.075 [INFO][4202] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" HandleID="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Workload="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.124 [INFO][4202] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" HandleID="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Workload="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046c720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-67ffbdf6dc-b5mzq", "timestamp":"2026-03-02 12:58:27.075232309 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000702420)} Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.195 [INFO][4202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.201 [INFO][4202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.202 [INFO][4202] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.264 [INFO][4202] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.288 [INFO][4202] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.405 [INFO][4202] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.412 [INFO][4202] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.445 [INFO][4202] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.447 [INFO][4202] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.465 [INFO][4202] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.494 [INFO][4202] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.614 [INFO][4202] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.648 [INFO][4202] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" host="localhost" Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.648 [INFO][4202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:28.018527 containerd[1470]: 2026-03-02 12:58:27.648 [INFO][4202] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" HandleID="k8s-pod-network.d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Workload="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.044221 containerd[1470]: 2026-03-02 12:58:27.688 [INFO][4179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0", GenerateName:"whisker-67ffbdf6dc-", Namespace:"calico-system", SelfLink:"", UID:"66da8b7d-8210-4b67-866c-1c172b5798f8", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67ffbdf6dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-67ffbdf6dc-b5mzq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5505b5e8bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:28.044221 containerd[1470]: 2026-03-02 12:58:27.691 [INFO][4179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.044221 containerd[1470]: 2026-03-02 12:58:27.692 [INFO][4179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5505b5e8bb ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.044221 containerd[1470]: 2026-03-02 12:58:27.879 [INFO][4179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.044221 containerd[1470]: 2026-03-02 12:58:27.880 [INFO][4179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0", GenerateName:"whisker-67ffbdf6dc-", Namespace:"calico-system", SelfLink:"", UID:"66da8b7d-8210-4b67-866c-1c172b5798f8", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67ffbdf6dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d", Pod:"whisker-67ffbdf6dc-b5mzq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5505b5e8bb", MAC:"56:5a:37:c8:ad:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:28.044221 containerd[1470]: 2026-03-02 12:58:27.986 [INFO][4179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d" Namespace="calico-system" Pod="whisker-67ffbdf6dc-b5mzq" WorkloadEndpoint="localhost-k8s-whisker--67ffbdf6dc--b5mzq-eth0" Mar 2 12:58:28.517104 containerd[1470]: time="2026-03-02T12:58:28.515819566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:28.517104 containerd[1470]: time="2026-03-02T12:58:28.515989683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:28.517104 containerd[1470]: time="2026-03-02T12:58:28.516011223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:28.517104 containerd[1470]: time="2026-03-02T12:58:28.516132789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:28.588386 kernel: calico-node[4117]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 12:58:28.617439 systemd[1]: Started cri-containerd-d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d.scope - libcontainer container d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d. 
Mar 2 12:58:28.762820 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:29.001164 containerd[1470]: time="2026-03-02T12:58:29.000187654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67ffbdf6dc-b5mzq,Uid:66da8b7d-8210-4b67-866c-1c172b5798f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d\"" Mar 2 12:58:29.014731 containerd[1470]: time="2026-03-02T12:58:29.014671911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 12:58:29.290792 systemd-networkd[1386]: calia5505b5e8bb: Gained IPv6LL Mar 2 12:58:30.415532 systemd-networkd[1386]: vxlan.calico: Link UP Mar 2 12:58:30.415547 systemd-networkd[1386]: vxlan.calico: Gained carrier Mar 2 12:58:30.700260 containerd[1470]: time="2026-03-02T12:58:30.698249450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:30.710103 containerd[1470]: time="2026-03-02T12:58:30.709144698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825" Mar 2 12:58:30.715637 containerd[1470]: time="2026-03-02T12:58:30.715536947Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:30.755165 containerd[1470]: time="2026-03-02T12:58:30.755105851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:30.760706 containerd[1470]: time="2026-03-02T12:58:30.760475888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 1.745743414s" Mar 2 12:58:30.760706 containerd[1470]: time="2026-03-02T12:58:30.760568972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\"" Mar 2 12:58:30.793047 containerd[1470]: time="2026-03-02T12:58:30.791096316Z" level=info msg="CreateContainer within sandbox \"d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 12:58:30.892576 containerd[1470]: time="2026-03-02T12:58:30.892262101Z" level=info msg="CreateContainer within sandbox \"d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bb7a1608b13065f81f455ec483bd579bc464c4a81b0d11a084634f6dda95ab69\"" Mar 2 12:58:30.897078 containerd[1470]: time="2026-03-02T12:58:30.895719138Z" level=info msg="StartContainer for \"bb7a1608b13065f81f455ec483bd579bc464c4a81b0d11a084634f6dda95ab69\"" Mar 2 12:58:30.997621 systemd[1]: run-containerd-runc-k8s.io-bb7a1608b13065f81f455ec483bd579bc464c4a81b0d11a084634f6dda95ab69-runc.LsB9aN.mount: Deactivated successfully. 
Mar 2 12:58:31.027469 systemd[1]: Started cri-containerd-bb7a1608b13065f81f455ec483bd579bc464c4a81b0d11a084634f6dda95ab69.scope - libcontainer container bb7a1608b13065f81f455ec483bd579bc464c4a81b0d11a084634f6dda95ab69. Mar 2 12:58:31.224209 containerd[1470]: time="2026-03-02T12:58:31.224142235Z" level=info msg="StartContainer for \"bb7a1608b13065f81f455ec483bd579bc464c4a81b0d11a084634f6dda95ab69\" returns successfully" Mar 2 12:58:31.235658 containerd[1470]: time="2026-03-02T12:58:31.231999248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 12:58:31.601339 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Mar 2 12:58:34.665392 containerd[1470]: time="2026-03-02T12:58:34.653831393Z" level=info msg="StopPodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\"" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.322 [INFO][4443] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.323 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" iface="eth0" netns="/var/run/netns/cni-be0fc745-bfe5-a906-a20c-9b67f114c810" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.324 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" iface="eth0" netns="/var/run/netns/cni-be0fc745-bfe5-a906-a20c-9b67f114c810" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.324 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" iface="eth0" netns="/var/run/netns/cni-be0fc745-bfe5-a906-a20c-9b67f114c810" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.324 [INFO][4443] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.324 [INFO][4443] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.480 [INFO][4452] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.480 [INFO][4452] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.481 [INFO][4452] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.499 [WARNING][4452] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.499 [INFO][4452] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.518 [INFO][4452] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:35.581597 containerd[1470]: 2026-03-02 12:58:35.546 [INFO][4443] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:58:35.590563 containerd[1470]: time="2026-03-02T12:58:35.590503986Z" level=info msg="TearDown network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" successfully" Mar 2 12:58:35.590796 containerd[1470]: time="2026-03-02T12:58:35.590660106Z" level=info msg="StopPodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" returns successfully" Mar 2 12:58:35.617267 systemd[1]: run-netns-cni\x2dbe0fc745\x2dbfe5\x2da906\x2da20c\x2d9b67f114c810.mount: Deactivated successfully. Mar 2 12:58:35.638169 containerd[1470]: time="2026-03-02T12:58:35.636984771Z" level=info msg="StopPodSandbox for \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\"" Mar 2 12:58:35.638390 containerd[1470]: time="2026-03-02T12:58:35.638202998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktfwf,Uid:6027383d-96c9-465b-88ba-00723209fa19,Namespace:calico-system,Attempt:1,}" Mar 2 12:58:36.132596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191879265.mount: Deactivated successfully. Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:35.939 [INFO][4470] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:35.940 [INFO][4470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" iface="eth0" netns="/var/run/netns/cni-add2eba5-528e-7a74-a5ad-a44f2e2dc6d1" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:35.940 [INFO][4470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" iface="eth0" netns="/var/run/netns/cni-add2eba5-528e-7a74-a5ad-a44f2e2dc6d1" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:35.940 [INFO][4470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" iface="eth0" netns="/var/run/netns/cni-add2eba5-528e-7a74-a5ad-a44f2e2dc6d1" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:35.940 [INFO][4470] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:35.940 [INFO][4470] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.047 [INFO][4491] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.049 [INFO][4491] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.049 [INFO][4491] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.095 [WARNING][4491] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.095 [INFO][4491] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.116 [INFO][4491] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:36.199797 containerd[1470]: 2026-03-02 12:58:36.133 [INFO][4470] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:58:36.216936 containerd[1470]: time="2026-03-02T12:58:36.202637843Z" level=info msg="TearDown network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" successfully" Mar 2 12:58:36.216936 containerd[1470]: time="2026-03-02T12:58:36.203239944Z" level=info msg="StopPodSandbox for \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" returns successfully" Mar 2 12:58:36.212374 systemd[1]: run-netns-cni\x2dadd2eba5\x2d528e\x2d7a74\x2da5ad\x2da44f2e2dc6d1.mount: Deactivated successfully. 
Mar 2 12:58:36.232500 containerd[1470]: time="2026-03-02T12:58:36.231593840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-frr9d,Uid:83de2541-d5e5-4ff8-9f13-b519d8c21fab,Namespace:calico-system,Attempt:1,}" Mar 2 12:58:36.323016 containerd[1470]: time="2026-03-02T12:58:36.317459421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:36.341091 containerd[1470]: time="2026-03-02T12:58:36.339600377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119" Mar 2 12:58:36.349895 containerd[1470]: time="2026-03-02T12:58:36.346407426Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:36.387558 containerd[1470]: time="2026-03-02T12:58:36.386544313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:36.391821 containerd[1470]: time="2026-03-02T12:58:36.391650733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 5.159569331s" Mar 2 12:58:36.391821 containerd[1470]: time="2026-03-02T12:58:36.391766469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\"" Mar 2 12:58:36.431256 containerd[1470]: time="2026-03-02T12:58:36.430689109Z" level=info msg="CreateContainer within sandbox \"d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 12:58:36.539033 containerd[1470]: time="2026-03-02T12:58:36.537479169Z" level=info msg="CreateContainer within sandbox \"d8c34f8e24b6d039758a41f9e84ef347f5ed3aa06e083bda8ab21561ee77829d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"238d44c5e6d43b383ab5351d9cbf99ac9b286abe56897b73c81839e8f5998551\"" Mar 2 12:58:36.546771 containerd[1470]: time="2026-03-02T12:58:36.546669306Z" level=info msg="StartContainer for \"238d44c5e6d43b383ab5351d9cbf99ac9b286abe56897b73c81839e8f5998551\"" Mar 2 12:58:36.609011 containerd[1470]: time="2026-03-02T12:58:36.606813610Z" level=info msg="StopPodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\"" Mar 2 12:58:36.661995 containerd[1470]: time="2026-03-02T12:58:36.660185833Z" level=info msg="StopPodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\"" Mar 2 12:58:36.664890 containerd[1470]: time="2026-03-02T12:58:36.664570288Z" level=info msg="StopPodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\"" Mar 2 12:58:36.664890 containerd[1470]: time="2026-03-02T12:58:36.664743710Z" level=info msg="StopPodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\"" Mar 2 12:58:36.795684 systemd[1]: 
run-containerd-runc-k8s.io-238d44c5e6d43b383ab5351d9cbf99ac9b286abe56897b73c81839e8f5998551-runc.nYmH6Q.mount: Deactivated successfully. Mar 2 12:58:36.810565 systemd[1]: Started cri-containerd-238d44c5e6d43b383ab5351d9cbf99ac9b286abe56897b73c81839e8f5998551.scope - libcontainer container 238d44c5e6d43b383ab5351d9cbf99ac9b286abe56897b73c81839e8f5998551. Mar 2 12:58:36.870957 systemd-networkd[1386]: calicbc0bc5b616: Link UP Mar 2 12:58:36.872147 systemd-networkd[1386]: calicbc0bc5b616: Gained carrier Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:35.952 [INFO][4476] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ktfwf-eth0 csi-node-driver- calico-system 6027383d-96c9-465b-88ba-00723209fa19 1146 0 2026-03-02 12:57:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6db5596769 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ktfwf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicbc0bc5b616 [] [] }} ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:35.956 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.249 [INFO][4498] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" HandleID="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.299 [INFO][4498] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" HandleID="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ca2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ktfwf", "timestamp":"2026-03-02 12:58:36.249713114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000c2420)} Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.299 [INFO][4498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.300 [INFO][4498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.300 [INFO][4498] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.330 [INFO][4498] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.382 [INFO][4498] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.451 [INFO][4498] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.474 [INFO][4498] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.504 [INFO][4498] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.504 [INFO][4498] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.531 [INFO][4498] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936 Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.578 [INFO][4498] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.770 [INFO][4498] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.771 [INFO][4498] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" host="localhost" Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.771 [INFO][4498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:58:37.362766 containerd[1470]: 2026-03-02 12:58:36.771 [INFO][4498] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" HandleID="k8s-pod-network.e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.364636 containerd[1470]: 2026-03-02 12:58:36.835 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktfwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6027383d-96c9-465b-88ba-00723209fa19", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6db5596769", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ktfwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbc0bc5b616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:37.364636 containerd[1470]: 2026-03-02 12:58:36.836 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.364636 containerd[1470]: 2026-03-02 12:58:36.836 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbc0bc5b616 ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.364636 containerd[1470]: 2026-03-02 12:58:36.898 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.364636 containerd[1470]: 2026-03-02 12:58:36.933 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktfwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6027383d-96c9-465b-88ba-00723209fa19", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6db5596769", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936", Pod:"csi-node-driver-ktfwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbc0bc5b616", MAC:"c6:a5:f8:cb:b4:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:37.364636 containerd[1470]: 2026-03-02 12:58:37.259 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936" Namespace="calico-system" Pod="csi-node-driver-ktfwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:58:37.698560 containerd[1470]: time="2026-03-02T12:58:37.690440389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:37.698560 containerd[1470]: time="2026-03-02T12:58:37.690538501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:37.698560 containerd[1470]: time="2026-03-02T12:58:37.690562656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:37.698560 containerd[1470]: time="2026-03-02T12:58:37.690693439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.250 [WARNING][4548] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" WorkloadEndpoint="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.258 [INFO][4548] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.259 [INFO][4548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" iface="eth0" netns="" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.259 [INFO][4548] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.261 [INFO][4548] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.450 [INFO][4631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.451 [INFO][4631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.459 [INFO][4631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.564 [WARNING][4631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.564 [INFO][4631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.591 [INFO][4631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:37.698560 containerd[1470]: 2026-03-02 12:58:37.637 [INFO][4548] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:37.698560 containerd[1470]: time="2026-03-02T12:58:37.695262255Z" level=info msg="TearDown network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" successfully" Mar 2 12:58:37.698560 containerd[1470]: time="2026-03-02T12:58:37.695368442Z" level=info msg="StopPodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" returns successfully" Mar 2 12:58:37.794007 containerd[1470]: time="2026-03-02T12:58:37.792405847Z" level=info msg="StartContainer for \"238d44c5e6d43b383ab5351d9cbf99ac9b286abe56897b73c81839e8f5998551\" returns successfully" Mar 2 12:58:37.809120 containerd[1470]: time="2026-03-02T12:58:37.809040133Z" level=info msg="RemovePodSandbox for \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\"" Mar 2 12:58:37.815630 containerd[1470]: time="2026-03-02T12:58:37.809441941Z" level=info msg="Forcibly stopping sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\"" Mar 2 12:58:38.018460 systemd[1]: Started cri-containerd-e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936.scope - libcontainer container e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936. 
Mar 2 12:58:38.125265 systemd-networkd[1386]: cali97369de38f5: Link UP Mar 2 12:58:38.127073 systemd-networkd[1386]: cali97369de38f5: Gained carrier Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:36.712 [INFO][4511] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0 calico-apiserver-6778944f75- calico-system 83de2541-d5e5-4ff8-9f13-b519d8c21fab 1148 0 2026-03-02 12:57:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6778944f75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6778944f75-frr9d eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali97369de38f5 [] [] }} ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:36.713 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.648 [INFO][4581] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" HandleID="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.747 [INFO][4581] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" HandleID="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005349e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6778944f75-frr9d", "timestamp":"2026-03-02 12:58:37.648400204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000410580)} Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.748 [INFO][4581] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.748 [INFO][4581] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.748 [INFO][4581] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.780 [INFO][4581] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.872 [INFO][4581] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.904 [INFO][4581] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.916 [INFO][4581] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.947 [INFO][4581] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.947 [INFO][4581] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:37.960 [INFO][4581] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0 Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:38.060 [INFO][4581] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:38.083 [INFO][4581] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:38.083 [INFO][4581] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" host="localhost" Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:38.083 [INFO][4581] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:58:38.204045 containerd[1470]: 2026-03-02 12:58:38.083 [INFO][4581] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" HandleID="k8s-pod-network.efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.205467 containerd[1470]: 2026-03-02 12:58:38.095 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"83de2541-d5e5-4ff8-9f13-b519d8c21fab", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6778944f75-frr9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali97369de38f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:38.205467 containerd[1470]: 2026-03-02 12:58:38.096 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.205467 containerd[1470]: 2026-03-02 12:58:38.096 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97369de38f5 ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.205467 containerd[1470]: 2026-03-02 12:58:38.124 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.205467 containerd[1470]: 2026-03-02 12:58:38.130 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"83de2541-d5e5-4ff8-9f13-b519d8c21fab", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0", Pod:"calico-apiserver-6778944f75-frr9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali97369de38f5", MAC:"f6:6c:1a:c9:55:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:38.205467 containerd[1470]: 2026-03-02 12:58:38.185 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0" Namespace="calico-system" Pod="calico-apiserver-6778944f75-frr9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:58:38.272622 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:37.578 [INFO][4601] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:37.578 [INFO][4601] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" iface="eth0" netns="/var/run/netns/cni-2e1b38e8-ecfc-49ef-77ef-5cb12149d849" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:37.590 [INFO][4601] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" iface="eth0" netns="/var/run/netns/cni-2e1b38e8-ecfc-49ef-77ef-5cb12149d849" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:37.592 [INFO][4601] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" iface="eth0" netns="/var/run/netns/cni-2e1b38e8-ecfc-49ef-77ef-5cb12149d849" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:37.592 [INFO][4601] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:37.592 [INFO][4601] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.171 [INFO][4668] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.173 [INFO][4668] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.178 [INFO][4668] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.265 [WARNING][4668] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.267 [INFO][4668] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.347 [INFO][4668] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:38.382148 containerd[1470]: 2026-03-02 12:58:38.370 [INFO][4601] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:58:38.389560 containerd[1470]: time="2026-03-02T12:58:38.387523514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktfwf,Uid:6027383d-96c9-465b-88ba-00723209fa19,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936\"" Mar 2 12:58:38.397488 containerd[1470]: time="2026-03-02T12:58:38.397172877Z" level=info msg="TearDown network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" successfully" Mar 2 12:58:38.397488 containerd[1470]: time="2026-03-02T12:58:38.397221837Z" level=info msg="StopPodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" returns successfully" Mar 2 12:58:38.402604 systemd[1]: run-netns-cni\x2d2e1b38e8\x2decfc\x2d49ef\x2d77ef\x2d5cb12149d849.mount: Deactivated successfully. 
Mar 2 12:58:38.410903 containerd[1470]: time="2026-03-02T12:58:38.410472970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 12:58:38.410903 containerd[1470]: time="2026-03-02T12:58:38.410688377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-867c96dbbc-vh6px,Uid:eb7eb2f6-04fe-4439-a025-db21095b18ae,Namespace:calico-system,Attempt:1,}" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:37.654 [INFO][4591] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:37.655 [INFO][4591] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" iface="eth0" netns="/var/run/netns/cni-425cb9cb-6255-27a9-cb61-a815f7b7c9b5" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:37.662 [INFO][4591] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" iface="eth0" netns="/var/run/netns/cni-425cb9cb-6255-27a9-cb61-a815f7b7c9b5" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:37.677 [INFO][4591] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" iface="eth0" netns="/var/run/netns/cni-425cb9cb-6255-27a9-cb61-a815f7b7c9b5" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:37.677 [INFO][4591] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:37.677 [INFO][4591] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.188 [INFO][4682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.192 [INFO][4682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.349 [INFO][4682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.374 [WARNING][4682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.374 [INFO][4682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.388 [INFO][4682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:58:38.436785 containerd[1470]: 2026-03-02 12:58:38.428 [INFO][4591] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:58:38.437511 containerd[1470]: time="2026-03-02T12:58:38.437401219Z" level=info msg="TearDown network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" successfully" Mar 2 12:58:38.437511 containerd[1470]: time="2026-03-02T12:58:38.437440182Z" level=info msg="StopPodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" returns successfully" Mar 2 12:58:38.445199 systemd[1]: run-netns-cni\x2d425cb9cb\x2d6255\x2d27a9\x2dcb61\x2da815f7b7c9b5.mount: Deactivated successfully. Mar 2 12:58:38.455389 kubelet[2603]: E0302 12:58:38.451729 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:38.459339 containerd[1470]: time="2026-03-02T12:58:38.459225908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4pk99,Uid:a17f4e37-d122-4cb7-a268-77a8710e322d,Namespace:kube-system,Attempt:1,}" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.142 [INFO][4619] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.142 [INFO][4619] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" iface="eth0" netns="/var/run/netns/cni-2fcee16d-d110-ae35-a71e-ccf5d99a861e" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.143 [INFO][4619] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" iface="eth0" netns="/var/run/netns/cni-2fcee16d-d110-ae35-a71e-ccf5d99a861e" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.150 [INFO][4619] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" iface="eth0" netns="/var/run/netns/cni-2fcee16d-d110-ae35-a71e-ccf5d99a861e" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.150 [INFO][4619] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.150 [INFO][4619] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.446 [INFO][4749] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.448 [INFO][4749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.450 [INFO][4749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.486 [WARNING][4749] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.487 [INFO][4749] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.497 [INFO][4749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:38.558076 containerd[1470]: 2026-03-02 12:58:38.519 [INFO][4619] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:58:38.576133 containerd[1470]: time="2026-03-02T12:58:38.571813548Z" level=info msg="TearDown network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" successfully" Mar 2 12:58:38.576133 containerd[1470]: time="2026-03-02T12:58:38.572183837Z" level=info msg="StopPodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" returns successfully" Mar 2 12:58:38.576133 containerd[1470]: time="2026-03-02T12:58:38.554552354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:38.576133 containerd[1470]: time="2026-03-02T12:58:38.554658021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:38.576133 containerd[1470]: time="2026-03-02T12:58:38.554681594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:38.576133 containerd[1470]: time="2026-03-02T12:58:38.554849037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:38.584235 containerd[1470]: time="2026-03-02T12:58:38.584164490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-x86ql,Uid:e94766e6-0a94-44da-adf7-e646b07431ce,Namespace:calico-system,Attempt:1,}" Mar 2 12:58:38.645490 kubelet[2603]: E0302 12:58:38.641705 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:38.650413 containerd[1470]: time="2026-03-02T12:58:38.650364553Z" level=info msg="StopPodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\"" Mar 2 12:58:38.661677 containerd[1470]: time="2026-03-02T12:58:38.661623316Z" level=info msg="StopPodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\"" Mar 2 12:58:38.711587 systemd[1]: Started cri-containerd-efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0.scope - libcontainer container efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0. 
Mar 2 12:58:38.722085 kubelet[2603]: I0302 12:58:38.711583 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-67ffbdf6dc-b5mzq" podStartSLOduration=6.31649209 podStartE2EDuration="13.71156353s" podCreationTimestamp="2026-03-02 12:58:25 +0000 UTC" firstStartedPulling="2026-03-02 12:58:29.0099983 +0000 UTC m=+113.037965247" lastFinishedPulling="2026-03-02 12:58:36.405069738 +0000 UTC m=+120.433036687" observedRunningTime="2026-03-02 12:58:38.711002995 +0000 UTC m=+122.738969974" watchObservedRunningTime="2026-03-02 12:58:38.71156353 +0000 UTC m=+122.739530477" Mar 2 12:58:38.789187 systemd[1]: run-netns-cni\x2d2fcee16d\x2dd110\x2dae35\x2da71e\x2dccf5d99a861e.mount: Deactivated successfully. Mar 2 12:58:38.829496 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.363 [WARNING][4726] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" WorkloadEndpoint="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.363 [INFO][4726] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.363 [INFO][4726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" iface="eth0" netns="" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.363 [INFO][4726] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.363 [INFO][4726] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.561 [INFO][4772] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.562 [INFO][4772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.562 [INFO][4772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.709 [WARNING][4772] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.709 [INFO][4772] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" HandleID="k8s-pod-network.bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Workload="localhost-k8s-whisker--7bbc66965c--97zk4-eth0" Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.761 [INFO][4772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:38.885210 containerd[1470]: 2026-03-02 12:58:38.838 [INFO][4726] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954" Mar 2 12:58:38.889906 containerd[1470]: time="2026-03-02T12:58:38.886161428Z" level=info msg="TearDown network for sandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" successfully" Mar 2 12:58:38.910572 systemd-networkd[1386]: calicbc0bc5b616: Gained IPv6LL Mar 2 12:58:39.005242 containerd[1470]: time="2026-03-02T12:58:39.005088782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:58:39.005742 containerd[1470]: time="2026-03-02T12:58:39.005621004Z" level=info msg="RemovePodSandbox \"bee6b2e850872591f1f3b14b015097d736a02fb2d6288a01435f2127174c8954\" returns successfully" Mar 2 12:58:39.157522 containerd[1470]: time="2026-03-02T12:58:39.156826037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-frr9d,Uid:83de2541-d5e5-4ff8-9f13-b519d8c21fab,Namespace:calico-system,Attempt:1,} returns sandbox id \"efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0\"" Mar 2 12:58:39.403786 systemd-networkd[1386]: cali97369de38f5: Gained IPv6LL Mar 2 12:58:39.843460 systemd-networkd[1386]: cali9957d044f00: Link UP Mar 2 12:58:39.859321 systemd-networkd[1386]: cali9957d044f00: Gained carrier Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.197 [INFO][4816] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--4pk99-eth0 coredns-66bc5c9577- kube-system a17f4e37-d122-4cb7-a268-77a8710e322d 1162 0 2026-03-02 12:56:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-4pk99 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9957d044f00 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.199 [INFO][4816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.497 [INFO][4925] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" HandleID="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.547 [INFO][4925] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" HandleID="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-4pk99", "timestamp":"2026-03-02 12:58:39.497244606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001122c0)} Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.550 [INFO][4925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.552 [INFO][4925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.552 [INFO][4925] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.567 [INFO][4925] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.609 [INFO][4925] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.661 [INFO][4925] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.678 [INFO][4925] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.734 [INFO][4925] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.734 [INFO][4925] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.757 [INFO][4925] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68 Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.770 [INFO][4925] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.805 [INFO][4925] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.806 [INFO][4925] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" host="localhost" Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.806 [INFO][4925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:39.983388 containerd[1470]: 2026-03-02 12:58:39.806 [INFO][4925] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" HandleID="k8s-pod-network.57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:39.986427 containerd[1470]: 2026-03-02 12:58:39.815 [INFO][4816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4pk99-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a17f4e37-d122-4cb7-a268-77a8710e322d", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-4pk99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9957d044f00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:39.986427 containerd[1470]: 2026-03-02 12:58:39.816 [INFO][4816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:39.986427 containerd[1470]: 2026-03-02 12:58:39.816 [INFO][4816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9957d044f00 ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:39.986427 containerd[1470]: 2026-03-02 12:58:39.876 [INFO][4816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:39.986427 containerd[1470]: 2026-03-02 12:58:39.884 [INFO][4816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4pk99-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a17f4e37-d122-4cb7-a268-77a8710e322d", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68", Pod:"coredns-66bc5c9577-4pk99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9957d044f00", MAC:"da:b3:72:2f:aa:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:39.986427 containerd[1470]: 2026-03-02 12:58:39.941 [INFO][4816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68" Namespace="kube-system" Pod="coredns-66bc5c9577-4pk99" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:58:40.218246 containerd[1470]: time="2026-03-02T12:58:40.216661072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:40.218246 containerd[1470]: time="2026-03-02T12:58:40.216939209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:40.218246 containerd[1470]: time="2026-03-02T12:58:40.216998960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:40.218246 containerd[1470]: time="2026-03-02T12:58:40.217429983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:40.235920 systemd-networkd[1386]: cali359911602cc: Link UP Mar 2 12:58:40.237389 systemd-networkd[1386]: cali359911602cc: Gained carrier Mar 2 12:58:40.328807 systemd[1]: run-containerd-runc-k8s.io-57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68-runc.Hn77gW.mount: Deactivated successfully. Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.362 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0 goldmane-54d7f6b6d6- calico-system e94766e6-0a94-44da-adf7-e646b07431ce 1166 0 2026-03-02 12:57:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d7f6b6d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d7f6b6d6-x86ql eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali359911602cc [] [] }} ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.362 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.496 [INFO][4939] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" HandleID="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.553 [INFO][4939] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" HandleID="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042f450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d7f6b6d6-x86ql", "timestamp":"2026-03-02 12:58:39.496846747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005b71e0)} Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.553 [INFO][4939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.810 [INFO][4939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.810 [INFO][4939] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.819 [INFO][4939] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.868 [INFO][4939] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.951 [INFO][4939] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.976 [INFO][4939] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.998 [INFO][4939] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:39.998 [INFO][4939] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:40.013 [INFO][4939] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:40.044 [INFO][4939] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:40.067 [INFO][4939] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:40.067 [INFO][4939] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" host="localhost" Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:40.068 [INFO][4939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
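The [4939] records above show the per-pod IPAM sequence Calico logs: take the host-wide lock, confirm the node's affinity for block 192.168.88.128/26, load the block, claim the next free address (here 192.168.88.133 for goldmane-54d7f6b6d6-x86ql), write the block back, and release the lock. The toy allocator below reproduces only that claim step with a mutex and an in-memory map; it is an illustration of the ordering, not libcalico-go's datastore-backed implementation.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator is an illustrative stand-in for the per-block assignment
// logged above (lock, load block, claim next free IP, record the handle,
// unlock). It is not the real Calico IPAM code.
type blockAllocator struct {
	mu    sync.Mutex // plays the role of the "host-wide IPAM lock"
	cidr  netip.Prefix
	inUse map[netip.Addr]string // address -> handle ID
}

func newBlockAllocator(cidr string) *blockAllocator {
	return &blockAllocator{cidr: netip.MustParsePrefix(cidr), inUse: map[netip.Addr]string{}}
}

// assign claims the lowest free address in the block for the given handle.
func (a *blockAllocator) assign(handle string) (netip.Addr, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for addr := a.cidr.Addr(); a.cidr.Contains(addr); addr = addr.Next() {
		if _, taken := a.inUse[addr]; !taken {
			a.inUse[addr] = handle
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", a.cidr)
}

func main() {
	a := newBlockAllocator("192.168.88.128/26")
	// Mark .128-.132 as already taken so the next claim lands on .133,
	// matching the [4939] assignment above.
	for i := 0; i < 5; i++ {
		a.assign(fmt.Sprintf("existing-%d", i))
	}
	ip, _ := a.assign("k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e")
	fmt.Println(ip) // 192.168.88.133
}

Serializing every claim behind one lock is also why the "About to acquire" / "Acquired host-wide IPAM lock" pairs in this log line up strictly one after another.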
Mar 2 12:58:40.366036 containerd[1470]: 2026-03-02 12:58:40.068 [INFO][4939] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" HandleID="k8s-pod-network.7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.368028 containerd[1470]: 2026-03-02 12:58:40.084 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"e94766e6-0a94-44da-adf7-e646b07431ce", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d7f6b6d6-x86ql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali359911602cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:40.368028 containerd[1470]: 2026-03-02 12:58:40.084 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.368028 containerd[1470]: 2026-03-02 12:58:40.084 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali359911602cc ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.368028 containerd[1470]: 2026-03-02 12:58:40.256 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.368028 containerd[1470]: 2026-03-02 12:58:40.259 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"e94766e6-0a94-44da-adf7-e646b07431ce", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e", Pod:"goldmane-54d7f6b6d6-x86ql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali359911602cc", MAC:"ba:0d:77:46:2c:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:40.368028 containerd[1470]: 2026-03-02 12:58:40.348 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-x86ql" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:58:40.369539 systemd[1]: Started cri-containerd-57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68.scope - libcontainer container 57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68. Mar 2 12:58:40.467056 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:40.473787 containerd[1470]: time="2026-03-02T12:58:40.472993907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:40.473787 containerd[1470]: time="2026-03-02T12:58:40.473213535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:40.473787 containerd[1470]: time="2026-03-02T12:58:40.473242499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:40.475481 containerd[1470]: time="2026-03-02T12:58:40.473729296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.275 [INFO][4867] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.278 [INFO][4867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" iface="eth0" netns="/var/run/netns/cni-01d4c087-0dc6-6866-6778-1e2003e13c4e" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.280 [INFO][4867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" iface="eth0" netns="/var/run/netns/cni-01d4c087-0dc6-6866-6778-1e2003e13c4e" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.283 [INFO][4867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" iface="eth0" netns="/var/run/netns/cni-01d4c087-0dc6-6866-6778-1e2003e13c4e" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.283 [INFO][4867] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.283 [INFO][4867] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.610 [INFO][4926] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:39.610 [INFO][4926] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:40.495 [INFO][4926] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:40.563 [WARNING][4926] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:40.563 [INFO][4926] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:40.570 [INFO][4926] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:40.662626 containerd[1470]: 2026-03-02 12:58:40.609 [INFO][4867] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:58:40.670640 systemd[1]: run-netns-cni\x2d01d4c087\x2d0dc6\x2d6866\x2d6778\x2d1e2003e13c4e.mount: Deactivated successfully. 
Mar 2 12:58:40.671711 containerd[1470]: time="2026-03-02T12:58:40.671596514Z" level=info msg="TearDown network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" successfully" Mar 2 12:58:40.671711 containerd[1470]: time="2026-03-02T12:58:40.671648010Z" level=info msg="StopPodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" returns successfully" Mar 2 12:58:40.675200 systemd-networkd[1386]: cali9027afd2620: Link UP Mar 2 12:58:40.699093 systemd-networkd[1386]: cali9027afd2620: Gained carrier Mar 2 12:58:40.702200 kubelet[2603]: E0302 12:58:40.702161 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:40.708092 containerd[1470]: time="2026-03-02T12:58:40.707996636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lt57c,Uid:82df1e0b-93ca-4d37-95cc-8fb21c222566,Namespace:kube-system,Attempt:1,}" Mar 2 12:58:40.737926 systemd[1]: Started cri-containerd-7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e.scope - libcontainer container 7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e. Mar 2 12:58:40.772995 containerd[1470]: time="2026-03-02T12:58:40.772731794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4pk99,Uid:a17f4e37-d122-4cb7-a268-77a8710e322d,Namespace:kube-system,Attempt:1,} returns sandbox id \"57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68\"" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.539 [INFO][4887] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.543 [INFO][4887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" iface="eth0" netns="/var/run/netns/cni-da2c2e67-a0af-e096-09b4-b8bbb8ba5ad2" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.546 [INFO][4887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" iface="eth0" netns="/var/run/netns/cni-da2c2e67-a0af-e096-09b4-b8bbb8ba5ad2" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.547 [INFO][4887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" iface="eth0" netns="/var/run/netns/cni-da2c2e67-a0af-e096-09b4-b8bbb8ba5ad2" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.547 [INFO][4887] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.547 [INFO][4887] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.658 [INFO][4957] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:39.658 [INFO][4957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:40.570 [INFO][4957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:40.695 [WARNING][4957] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:40.698 [INFO][4957] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:40.714 [INFO][4957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:58:40.775446 containerd[1470]: 2026-03-02 12:58:40.770 [INFO][4887] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:58:40.777032 containerd[1470]: time="2026-03-02T12:58:40.776801798Z" level=info msg="TearDown network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" successfully" Mar 2 12:58:40.777032 containerd[1470]: time="2026-03-02T12:58:40.776848175Z" level=info msg="StopPodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" returns successfully" Mar 2 12:58:40.780961 kubelet[2603]: E0302 12:58:40.778619 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:40.805204 containerd[1470]: time="2026-03-02T12:58:40.804436471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-mnf89,Uid:9e246d66-0a6c-4a48-969d-096fb291a66e,Namespace:calico-system,Attempt:1,}" Mar 2 12:58:40.815739 containerd[1470]: time="2026-03-02T12:58:40.815521068Z" level=info msg="CreateContainer within sandbox \"57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:39.095 [INFO][4798] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0 calico-kube-controllers-867c96dbbc- calico-system eb7eb2f6-04fe-4439-a025-db21095b18ae 1160 0 2026-03-02 12:57:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:867c96dbbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-867c96dbbc-vh6px eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9027afd2620 [] [] }} ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:39.098 [INFO][4798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:39.557 [INFO][4916] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" HandleID="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:39.608 [INFO][4916] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" HandleID="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122a90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-867c96dbbc-vh6px", "timestamp":"2026-03-02 12:58:39.557141995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000372420)} Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:39.608 [INFO][4916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.071 [INFO][4916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.073 [INFO][4916] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.103 [INFO][4916] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.240 [INFO][4916] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.356 [INFO][4916] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.365 [INFO][4916] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.378 [INFO][4916] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.383 [INFO][4916] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.401 [INFO][4916] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5 Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.446 [INFO][4916] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.494 [INFO][4916] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.494 [INFO][4916] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" host="localhost" Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.494 [INFO][4916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:58:40.848265 containerd[1470]: 2026-03-02 12:58:40.494 [INFO][4916] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" HandleID="k8s-pod-network.513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.850487 containerd[1470]: 2026-03-02 12:58:40.520 [INFO][4798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0", GenerateName:"calico-kube-controllers-867c96dbbc-", Namespace:"calico-system", SelfLink:"", UID:"eb7eb2f6-04fe-4439-a025-db21095b18ae", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"867c96dbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-867c96dbbc-vh6px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9027afd2620", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:40.850487 containerd[1470]: 2026-03-02 12:58:40.567 [INFO][4798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.850487 containerd[1470]: 2026-03-02 12:58:40.567 [INFO][4798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9027afd2620 ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.850487 containerd[1470]: 2026-03-02 12:58:40.692 [INFO][4798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.850487 containerd[1470]: 2026-03-02 12:58:40.694 [INFO][4798] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0", GenerateName:"calico-kube-controllers-867c96dbbc-", Namespace:"calico-system", SelfLink:"", UID:"eb7eb2f6-04fe-4439-a025-db21095b18ae", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"867c96dbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5", Pod:"calico-kube-controllers-867c96dbbc-vh6px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9027afd2620", MAC:"96:77:eb:04:86:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:40.850487 containerd[1470]: 2026-03-02 12:58:40.812 [INFO][4798] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5" Namespace="calico-system" Pod="calico-kube-controllers-867c96dbbc-vh6px" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:58:40.887912 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:40.979677 containerd[1470]: time="2026-03-02T12:58:40.979481140Z" level=info msg="CreateContainer within sandbox \"57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6761764c7d7614e8389b030a94e2ba9d0b849bbab329b0c1de2350197803bb08\"" Mar 2 12:58:40.996951 containerd[1470]: time="2026-03-02T12:58:40.996642015Z" level=info msg="StartContainer for \"6761764c7d7614e8389b030a94e2ba9d0b849bbab329b0c1de2350197803bb08\"" Mar 2 12:58:41.037008 containerd[1470]: time="2026-03-02T12:58:41.035448159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:41.037008 containerd[1470]: time="2026-03-02T12:58:41.035554097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:41.037008 containerd[1470]: time="2026-03-02T12:58:41.035573623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:41.037008 containerd[1470]: time="2026-03-02T12:58:41.035727289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:41.078918 containerd[1470]: time="2026-03-02T12:58:41.077745332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-x86ql,Uid:e94766e6-0a94-44da-adf7-e646b07431ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e\"" Mar 2 12:58:41.114635 systemd[1]: Started cri-containerd-513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5.scope - libcontainer container 513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5. Mar 2 12:58:41.159186 systemd[1]: Started cri-containerd-6761764c7d7614e8389b030a94e2ba9d0b849bbab329b0c1de2350197803bb08.scope - libcontainer container 6761764c7d7614e8389b030a94e2ba9d0b849bbab329b0c1de2350197803bb08. Mar 2 12:58:41.193638 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:41.226674 containerd[1470]: time="2026-03-02T12:58:41.226405602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:41.230842 containerd[1470]: time="2026-03-02T12:58:41.230738347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 12:58:41.231385 containerd[1470]: time="2026-03-02T12:58:41.231351650Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:41.257068 systemd[1]: run-netns-cni\x2dda2c2e67\x2da0af\x2de096\x2d09b4\x2db8bbb8ba5ad2.mount: Deactivated successfully. 
Mar 2 12:58:41.263023 containerd[1470]: time="2026-03-02T12:58:41.262733695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:41.281127 containerd[1470]: time="2026-03-02T12:58:41.279919928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 2.869386966s" Mar 2 12:58:41.281127 containerd[1470]: time="2026-03-02T12:58:41.279982265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 12:58:41.295917 containerd[1470]: time="2026-03-02T12:58:41.295054690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:58:41.317401 containerd[1470]: time="2026-03-02T12:58:41.317229539Z" level=info msg="StartContainer for \"6761764c7d7614e8389b030a94e2ba9d0b849bbab329b0c1de2350197803bb08\" returns successfully" Mar 2 12:58:41.324711 containerd[1470]: time="2026-03-02T12:58:41.320856469Z" level=info msg="CreateContainer within sandbox \"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 12:58:41.391735 containerd[1470]: time="2026-03-02T12:58:41.391672883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-867c96dbbc-vh6px,Uid:eb7eb2f6-04fe-4439-a025-db21095b18ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5\"" Mar 2 12:58:41.480190 containerd[1470]: time="2026-03-02T12:58:41.478738679Z" level=info msg="CreateContainer within sandbox \"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2c242e1198eddf303b658cb051d051c0fa8682ccb64c3436a2a5b8e4f37d1032\"" Mar 2 12:58:41.486007 containerd[1470]: time="2026-03-02T12:58:41.485961843Z" level=info msg="StartContainer for \"2c242e1198eddf303b658cb051d051c0fa8682ccb64c3436a2a5b8e4f37d1032\"" Mar 2 12:58:41.576966 systemd-networkd[1386]: cali9957d044f00: Gained IPv6LL Mar 2 12:58:41.639162 systemd[1]: run-containerd-runc-k8s.io-2c242e1198eddf303b658cb051d051c0fa8682ccb64c3436a2a5b8e4f37d1032-runc.K4JiDu.mount: Deactivated successfully. Mar 2 12:58:41.652699 systemd[1]: Started cri-containerd-2c242e1198eddf303b658cb051d051c0fa8682ccb64c3436a2a5b8e4f37d1032.scope - libcontainer container 2c242e1198eddf303b658cb051d051c0fa8682ccb64c3436a2a5b8e4f37d1032. 
Mar 2 12:58:41.685371 kubelet[2603]: E0302 12:58:41.683541 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:41.694118 systemd-networkd[1386]: cali5ec77602dbb: Link UP Mar 2 12:58:41.701707 systemd-networkd[1386]: cali5ec77602dbb: Gained carrier Mar 2 12:58:41.747198 kubelet[2603]: I0302 12:58:41.747083 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4pk99" podStartSLOduration=119.746995949 podStartE2EDuration="1m59.746995949s" podCreationTimestamp="2026-03-02 12:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:58:41.746566477 +0000 UTC m=+125.774533454" watchObservedRunningTime="2026-03-02 12:58:41.746995949 +0000 UTC m=+125.774962897" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.172 [INFO][5101] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--lt57c-eth0 coredns-66bc5c9577- kube-system 82df1e0b-93ca-4d37-95cc-8fb21c222566 1183 0 2026-03-02 12:56:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-lt57c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ec77602dbb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.173 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.302 [INFO][5200] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" HandleID="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.415 [INFO][5200] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" HandleID="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00068a0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-lt57c", "timestamp":"2026-03-02 12:58:41.302065147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e0000)} Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.415 [INFO][5200] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.415 [INFO][5200] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.415 [INFO][5200] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.442 [INFO][5200] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.474 [INFO][5200] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.513 [INFO][5200] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.529 [INFO][5200] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.545 [INFO][5200] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.545 [INFO][5200] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.555 [INFO][5200] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.571 [INFO][5200] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.622 [INFO][5200] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.625 [INFO][5200] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" host="localhost" Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.626 [INFO][5200] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
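The IPAM sequence above is how these pods end up on consecutive addresses: the plugin takes the host-wide IPAM lock, confirms the host's affinity for block 192.168.88.128/26, claims the next free address from that block (192.168.88.135 here, after 192.168.88.134 went to calico-kube-controllers earlier in the log), and releases the lock. A minimal sketch of the block-allocation idea only, not Calico's actual implementation; the already-used addresses below .134 are assumed for illustration:

```python
import ipaddress

# Hypothetical in-memory stand-in for the affine block; Calico keeps the real
# allocations in its datastore and serializes access with the host-wide lock.
block = ipaddress.ip_network("192.168.88.128/26")
allocated = {ipaddress.ip_address(f"192.168.88.{last}") for last in range(129, 135)}

def assign_next(block, allocated):
    """Mimic 'Attempting to assign 1 addresses from block': first free host IP."""
    for addr in block.hosts():
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("block exhausted")

print(assign_next(block, allocated))  # 192.168.88.135, as claimed for coredns-66bc5c9577-lt57c
print(assign_next(block, allocated))  # 192.168.88.136, claimed next for calico-apiserver-6778944f75-mnf89
```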
Mar 2 12:58:41.799554 containerd[1470]: 2026-03-02 12:58:41.626 [INFO][5200] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" HandleID="k8s-pod-network.61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.801595 containerd[1470]: 2026-03-02 12:58:41.684 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lt57c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"82df1e0b-93ca-4d37-95cc-8fb21c222566", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-lt57c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec77602dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:41.801595 containerd[1470]: 2026-03-02 12:58:41.684 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.801595 containerd[1470]: 2026-03-02 12:58:41.684 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec77602dbb ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.801595 containerd[1470]: 2026-03-02 12:58:41.713 
[INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.801595 containerd[1470]: 2026-03-02 12:58:41.715 [INFO][5101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lt57c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"82df1e0b-93ca-4d37-95cc-8fb21c222566", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a", Pod:"coredns-66bc5c9577-lt57c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec77602dbb", MAC:"86:84:90:db:71:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:41.801595 containerd[1470]: 2026-03-02 12:58:41.781 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a" Namespace="kube-system" Pod="coredns-66bc5c9577-lt57c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:58:41.857389 containerd[1470]: time="2026-03-02T12:58:41.854807683Z" level=info msg="StartContainer for \"2c242e1198eddf303b658cb051d051c0fa8682ccb64c3436a2a5b8e4f37d1032\" returns successfully" Mar 2 12:58:41.955158 containerd[1470]: time="2026-03-02T12:58:41.953255276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:41.955158 containerd[1470]: time="2026-03-02T12:58:41.953794471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:41.955158 containerd[1470]: time="2026-03-02T12:58:41.954073069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:41.961087 containerd[1470]: time="2026-03-02T12:58:41.954651666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:41.989634 systemd-networkd[1386]: cali58d682b7fe7: Link UP Mar 2 12:58:41.991800 systemd-networkd[1386]: cali58d682b7fe7: Gained carrier Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.159 [INFO][5112] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0 calico-apiserver-6778944f75- calico-system 9e246d66-0a6c-4a48-969d-096fb291a66e 1184 0 2026-03-02 12:57:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6778944f75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6778944f75-mnf89 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali58d682b7fe7 [] [] }} ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.159 [INFO][5112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.400 [INFO][5201] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" HandleID="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.437 [INFO][5201] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" HandleID="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000474530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6778944f75-mnf89", "timestamp":"2026-03-02 12:58:41.400228018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00037a000)} Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.437 [INFO][5201] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.628 [INFO][5201] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.628 [INFO][5201] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.670 [INFO][5201] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.705 [INFO][5201] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.769 [INFO][5201] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.785 [INFO][5201] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.800 [INFO][5201] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.800 [INFO][5201] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.815 [INFO][5201] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6 Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.846 [INFO][5201] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.936 [INFO][5201] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.939 [INFO][5201] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" host="localhost" Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.940 [INFO][5201] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:58:42.072390 containerd[1470]: 2026-03-02 12:58:41.940 [INFO][5201] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" HandleID="k8s-pod-network.726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.077837 containerd[1470]: 2026-03-02 12:58:41.954 [INFO][5112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"9e246d66-0a6c-4a48-969d-096fb291a66e", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6778944f75-mnf89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali58d682b7fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:42.077837 containerd[1470]: 2026-03-02 12:58:41.955 [INFO][5112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.077837 containerd[1470]: 2026-03-02 12:58:41.955 [INFO][5112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58d682b7fe7 ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.077837 containerd[1470]: 2026-03-02 12:58:41.992 [INFO][5112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.077837 containerd[1470]: 2026-03-02 12:58:41.993 [INFO][5112] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"9e246d66-0a6c-4a48-969d-096fb291a66e", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6", Pod:"calico-apiserver-6778944f75-mnf89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali58d682b7fe7", MAC:"9e:09:21:b4:62:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:58:42.077837 containerd[1470]: 2026-03-02 12:58:42.056 [INFO][5112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6" Namespace="calico-system" Pod="calico-apiserver-6778944f75-mnf89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:58:42.077187 systemd[1]: Started cri-containerd-61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a.scope - libcontainer container 61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a. Mar 2 12:58:42.092996 systemd-networkd[1386]: cali359911602cc: Gained IPv6LL Mar 2 12:58:42.130140 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:42.152693 systemd-networkd[1386]: cali9027afd2620: Gained IPv6LL Mar 2 12:58:42.192843 containerd[1470]: time="2026-03-02T12:58:42.191066251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:58:42.192843 containerd[1470]: time="2026-03-02T12:58:42.191370337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:58:42.192843 containerd[1470]: time="2026-03-02T12:58:42.191400784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:42.192843 containerd[1470]: time="2026-03-02T12:58:42.191697597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:58:42.235084 containerd[1470]: time="2026-03-02T12:58:42.235031975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lt57c,Uid:82df1e0b-93ca-4d37-95cc-8fb21c222566,Namespace:kube-system,Attempt:1,} returns sandbox id \"61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a\"" Mar 2 12:58:42.238371 kubelet[2603]: E0302 12:58:42.237130 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:42.286101 containerd[1470]: time="2026-03-02T12:58:42.286047734Z" level=info msg="CreateContainer within sandbox \"61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:58:42.302185 systemd[1]: Started cri-containerd-726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6.scope - libcontainer container 726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6. Mar 2 12:58:42.359258 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:58:42.373221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656154172.mount: Deactivated successfully. Mar 2 12:58:42.404523 containerd[1470]: time="2026-03-02T12:58:42.403837783Z" level=info msg="CreateContainer within sandbox \"61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96eb2ebbf0fc4ca203d6231075709bb7f721077a82ffce1a99b4dd6774914e1d\"" Mar 2 12:58:42.408844 containerd[1470]: time="2026-03-02T12:58:42.408673624Z" level=info msg="StartContainer for \"96eb2ebbf0fc4ca203d6231075709bb7f721077a82ffce1a99b4dd6774914e1d\"" Mar 2 12:58:42.510632 containerd[1470]: time="2026-03-02T12:58:42.510575851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778944f75-mnf89,Uid:9e246d66-0a6c-4a48-969d-096fb291a66e,Namespace:calico-system,Attempt:1,} returns sandbox id \"726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6\"" Mar 2 12:58:42.532187 systemd[1]: Started cri-containerd-96eb2ebbf0fc4ca203d6231075709bb7f721077a82ffce1a99b4dd6774914e1d.scope - libcontainer container 96eb2ebbf0fc4ca203d6231075709bb7f721077a82ffce1a99b4dd6774914e1d. Mar 2 12:58:42.683273 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:57588.service - OpenSSH per-connection server daemon (10.0.0.1:57588). Mar 2 12:58:43.096047 containerd[1470]: time="2026-03-02T12:58:43.095703778Z" level=info msg="StartContainer for \"96eb2ebbf0fc4ca203d6231075709bb7f721077a82ffce1a99b4dd6774914e1d\" returns successfully" Mar 2 12:58:43.218570 kubelet[2603]: E0302 12:58:43.211422 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:43.264810 sshd[5426]: Accepted publickey for core from 10.0.0.1 port 57588 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:58:43.305207 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:43.338736 systemd-logind[1440]: New session 8 of user core. Mar 2 12:58:43.352814 systemd[1]: Started session-8.scope - Session 8 of User core. 
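The kubelet "Nameserver limits exceeded" errors that keep appearing in this log mean the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three are passed to pods (the applied line 1.1.1.1 1.0.0.1 8.8.8.8). A rough illustration of that truncation; the three-entry cap is the standard limit, while the fourth server and the simplified parsing here are assumptions for the example:

```python
MAX_NAMESERVERS = 3  # classic resolv.conf limit that kubelet enforces for pods

def applied_nameservers(resolv_conf: str) -> list:
    """Keep only the first MAX_NAMESERVERS 'nameserver' entries, which is what
    triggers the 'some nameservers have been omitted' warning when more exist."""
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS]

# Hypothetical resolv.conf with one server too many.
example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```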
Mar 2 12:58:43.497074 systemd-networkd[1386]: cali5ec77602dbb: Gained IPv6LL Mar 2 12:58:43.833536 systemd-networkd[1386]: cali58d682b7fe7: Gained IPv6LL Mar 2 12:58:44.024838 sshd[5426]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:44.048232 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:57588.service: Deactivated successfully. Mar 2 12:58:44.058242 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 12:58:44.069222 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Mar 2 12:58:44.077474 systemd-logind[1440]: Removed session 8. Mar 2 12:58:44.249125 kubelet[2603]: E0302 12:58:44.248760 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:44.249979 kubelet[2603]: E0302 12:58:44.249405 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:44.397552 kubelet[2603]: I0302 12:58:44.396702 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lt57c" podStartSLOduration=122.396681125 podStartE2EDuration="2m2.396681125s" podCreationTimestamp="2026-03-02 12:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:58:44.331095561 +0000 UTC m=+128.359062559" watchObservedRunningTime="2026-03-02 12:58:44.396681125 +0000 UTC m=+128.424648094" Mar 2 12:58:45.400035 kubelet[2603]: E0302 12:58:45.395229 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:46.411416 kubelet[2603]: E0302 12:58:46.401402 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:49.357117 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:50994.service - OpenSSH per-connection server daemon (10.0.0.1:50994). Mar 2 12:58:49.748221 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 50994 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:58:49.752649 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:49.781759 systemd-logind[1440]: New session 9 of user core. Mar 2 12:58:49.910815 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 12:58:51.555464 sshd[5497]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:51.594747 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:50994.service: Deactivated successfully. Mar 2 12:58:51.608674 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 12:58:51.632540 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Mar 2 12:58:51.647539 systemd-logind[1440]: Removed session 9. 
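The pod-startup figures logged above are plain timestamp differences: for coredns-66bc5c9577-lt57c, podStartSLOduration=122.396681125 is exactly watchObservedRunningTime (12:58:44.396681125) minus podCreationTimestamp (12:56:42), i.e. 2m2.396681125s. A quick check of that arithmetic (Python datetimes only carry microseconds, so the trailing nanoseconds are truncated):

```python
from datetime import datetime, timezone

created  = datetime(2026, 3, 2, 12, 56, 42, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2026, 3, 2, 12, 58, 44, 396681, tzinfo=timezone.utc)  # watchObservedRunningTime

print((observed - created).total_seconds())  # 122.396681, matching podStartSLOduration=122.396681125
```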
Mar 2 12:58:53.107020 containerd[1470]: time="2026-03-02T12:58:53.106207992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:53.113527 containerd[1470]: time="2026-03-02T12:58:53.109164131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 12:58:53.128700 containerd[1470]: time="2026-03-02T12:58:53.128430087Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:53.136056 containerd[1470]: time="2026-03-02T12:58:53.135261278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:53.143226 containerd[1470]: time="2026-03-02T12:58:53.140962134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 11.845817997s" Mar 2 12:58:53.143226 containerd[1470]: time="2026-03-02T12:58:53.141037875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:58:53.158721 containerd[1470]: time="2026-03-02T12:58:53.158531211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 12:58:53.180233 containerd[1470]: time="2026-03-02T12:58:53.178049992Z" level=info msg="CreateContainer within sandbox \"efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:58:53.288683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373537323.mount: Deactivated successfully. Mar 2 12:58:53.292057 containerd[1470]: time="2026-03-02T12:58:53.289736466Z" level=info msg="CreateContainer within sandbox \"efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6d399fd224b53541d1d66f18183a1a89d9d52b5fb29ba09efdd9288e3443ff6\"" Mar 2 12:58:53.296243 containerd[1470]: time="2026-03-02T12:58:53.295975775Z" level=info msg="StartContainer for \"d6d399fd224b53541d1d66f18183a1a89d9d52b5fb29ba09efdd9288e3443ff6\"" Mar 2 12:58:53.530001 systemd[1]: Started cri-containerd-d6d399fd224b53541d1d66f18183a1a89d9d52b5fb29ba09efdd9288e3443ff6.scope - libcontainer container d6d399fd224b53541d1d66f18183a1a89d9d52b5fb29ba09efdd9288e3443ff6. Mar 2 12:58:53.736438 containerd[1470]: time="2026-03-02T12:58:53.734092017Z" level=info msg="StartContainer for \"d6d399fd224b53541d1d66f18183a1a89d9d52b5fb29ba09efdd9288e3443ff6\" returns successfully" Mar 2 12:58:56.616007 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:51004.service - OpenSSH per-connection server daemon (10.0.0.1:51004). 
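The pull above also gives a rough sense of registry throughput: about 48.4 MB was fetched ("bytes read=48403149") in 11.845817997s for the apiserver image. A back-of-the-envelope calculation with figures taken from the log (bytes read is used rather than the reported image size, since it is what was actually transferred):

```python
bytes_read = 48_403_149    # "stop pulling image ...: active requests=0, bytes read=48403149"
duration_s = 11.845817997  # "... in 11.845817997s"

mb = bytes_read / 1_000_000
print(f"{mb:.1f} MB in {duration_s:.2f} s ~ {mb / duration_s:.1f} MB/s")  # 48.4 MB in 11.85 s ~ 4.1 MB/s
```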
Mar 2 12:58:56.980069 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 51004 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:58:56.993133 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:57.019394 systemd-logind[1440]: New session 10 of user core. Mar 2 12:58:57.045479 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 12:58:58.602029 sshd[5608]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:58.652110 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:51004.service: Deactivated successfully. Mar 2 12:58:58.663681 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 12:58:58.668562 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Mar 2 12:58:58.694601 systemd-logind[1440]: Removed session 10. Mar 2 12:59:00.901960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4077463423.mount: Deactivated successfully. Mar 2 12:59:02.056366 kubelet[2603]: I0302 12:59:02.052950 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6778944f75-frr9d" podStartSLOduration=95.063262228 podStartE2EDuration="1m49.052875929s" podCreationTimestamp="2026-03-02 12:57:13 +0000 UTC" firstStartedPulling="2026-03-02 12:58:39.163186377 +0000 UTC m=+123.191153326" lastFinishedPulling="2026-03-02 12:58:53.152800079 +0000 UTC m=+137.180767027" observedRunningTime="2026-03-02 12:58:54.96534666 +0000 UTC m=+138.993313608" watchObservedRunningTime="2026-03-02 12:59:02.052875929 +0000 UTC m=+146.080842887" Mar 2 12:59:02.632958 kubelet[2603]: E0302 12:59:02.632419 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:03.693488 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:39138.service - OpenSSH per-connection server daemon (10.0.0.1:39138). Mar 2 12:59:04.102367 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 39138 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:04.107123 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:04.165349 systemd-logind[1440]: New session 11 of user core. Mar 2 12:59:04.191699 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 12:59:04.939053 containerd[1470]: time="2026-03-02T12:59:04.938946004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:04.943392 sshd[5637]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:04.949443 containerd[1470]: time="2026-03-02T12:59:04.948151117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 12:59:04.955983 containerd[1470]: time="2026-03-02T12:59:04.954408199Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:04.956043 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:39138.service: Deactivated successfully. Mar 2 12:59:04.962763 systemd[1]: session-11.scope: Deactivated successfully. 
Mar 2 12:59:04.966235 containerd[1470]: time="2026-03-02T12:59:04.964151680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:04.966235 containerd[1470]: time="2026-03-02T12:59:04.966070002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 11.807270581s" Mar 2 12:59:04.966235 containerd[1470]: time="2026-03-02T12:59:04.966111049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 12:59:04.968470 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Mar 2 12:59:04.969975 containerd[1470]: time="2026-03-02T12:59:04.969875320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 12:59:04.971244 systemd-logind[1440]: Removed session 11. Mar 2 12:59:04.994824 containerd[1470]: time="2026-03-02T12:59:04.994667417Z" level=info msg="CreateContainer within sandbox \"7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 12:59:05.063026 containerd[1470]: time="2026-03-02T12:59:05.062850036Z" level=info msg="CreateContainer within sandbox \"7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fceafdf5f43e8dca41a3315323b5559a55a0430fc1410f68626ab099ad0afbf7\"" Mar 2 12:59:05.064579 containerd[1470]: time="2026-03-02T12:59:05.064143556Z" level=info msg="StartContainer for \"fceafdf5f43e8dca41a3315323b5559a55a0430fc1410f68626ab099ad0afbf7\"" Mar 2 12:59:05.467051 systemd[1]: Started cri-containerd-fceafdf5f43e8dca41a3315323b5559a55a0430fc1410f68626ab099ad0afbf7.scope - libcontainer container fceafdf5f43e8dca41a3315323b5559a55a0430fc1410f68626ab099ad0afbf7. Mar 2 12:59:05.689142 containerd[1470]: time="2026-03-02T12:59:05.688834735Z" level=info msg="StartContainer for \"fceafdf5f43e8dca41a3315323b5559a55a0430fc1410f68626ab099ad0afbf7\" returns successfully" Mar 2 12:59:09.991274 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:40184.service - OpenSSH per-connection server daemon (10.0.0.1:40184). Mar 2 12:59:11.509362 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 40184 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:11.515595 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:11.574975 systemd-logind[1440]: New session 12 of user core. Mar 2 12:59:11.666655 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 12:59:12.619400 kubelet[2603]: E0302 12:59:12.618440 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:13.876761 sshd[5768]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:13.901858 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. 
Mar 2 12:59:13.903600 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:40184.service: Deactivated successfully. Mar 2 12:59:13.909169 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 12:59:13.914237 systemd-logind[1440]: Removed session 12. Mar 2 12:59:18.444426 containerd[1470]: time="2026-03-02T12:59:18.443873374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:18.445389 containerd[1470]: time="2026-03-02T12:59:18.444688993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 12:59:18.447182 containerd[1470]: time="2026-03-02T12:59:18.447107522Z" level=info msg="ImageCreate event name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:18.462574 containerd[1470]: time="2026-03-02T12:59:18.462448903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:18.471118 containerd[1470]: time="2026-03-02T12:59:18.465626502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 13.495658438s" Mar 2 12:59:18.473532 containerd[1470]: time="2026-03-02T12:59:18.473482727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 12:59:18.481797 containerd[1470]: time="2026-03-02T12:59:18.481352280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 12:59:18.655293 containerd[1470]: time="2026-03-02T12:59:18.654931567Z" level=info msg="CreateContainer within sandbox \"513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 12:59:18.785355 containerd[1470]: time="2026-03-02T12:59:18.780543974Z" level=info msg="CreateContainer within sandbox \"513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d10cfceac3f92bce36bf9b00309631d1a8d6f5a9cabf3221bcfd393fd9e88b5b\"" Mar 2 12:59:18.785355 containerd[1470]: time="2026-03-02T12:59:18.783094145Z" level=info msg="StartContainer for \"d10cfceac3f92bce36bf9b00309631d1a8d6f5a9cabf3221bcfd393fd9e88b5b\"" Mar 2 12:59:18.879130 systemd[1]: Started cri-containerd-d10cfceac3f92bce36bf9b00309631d1a8d6f5a9cabf3221bcfd393fd9e88b5b.scope - libcontainer container d10cfceac3f92bce36bf9b00309631d1a8d6f5a9cabf3221bcfd393fd9e88b5b. Mar 2 12:59:18.966135 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:58050.service - OpenSSH per-connection server daemon (10.0.0.1:58050). 
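Each SSH login in this log shows up twice: as a socket-activated per-connection unit whose instance name encodes the listening and remote endpoints (e.g. sshd@12-10.0.0.12:22-10.0.0.1:58050.service just above), and as a session-N.scope created by systemd-logind. A small parser for that naming convention, with the field layout inferred from the unit names in this log (IPv4 only):

```python
import re

UNIT_RE = re.compile(
    r"sshd@(?P<seq>\d+)-(?P<local_ip>[\d.]+):(?P<local_port>\d+)"
    r"-(?P<remote_ip>[\d.]+):(?P<remote_port>\d+)\.service"
)

def parse_ssh_unit(unit: str) -> dict:
    """Split a per-connection sshd unit name into its endpoints."""
    m = UNIT_RE.fullmatch(unit)
    if m is None:
        raise ValueError(f"not a per-connection sshd unit: {unit}")
    return m.groupdict()

print(parse_ssh_unit("sshd@12-10.0.0.12:22-10.0.0.1:58050.service"))
# {'seq': '12', 'local_ip': '10.0.0.12', 'local_port': '22', 'remote_ip': '10.0.0.1', 'remote_port': '58050'}
```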
Mar 2 12:59:19.100742 containerd[1470]: time="2026-03-02T12:59:19.100610103Z" level=info msg="StartContainer for \"d10cfceac3f92bce36bf9b00309631d1a8d6f5a9cabf3221bcfd393fd9e88b5b\" returns successfully" Mar 2 12:59:19.311502 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:19.316059 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:19.336052 systemd-logind[1440]: New session 13 of user core. Mar 2 12:59:19.345707 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 12:59:20.032219 kubelet[2603]: I0302 12:59:20.032096 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d7f6b6d6-x86ql" podStartSLOduration=103.149462409 podStartE2EDuration="2m7.032068264s" podCreationTimestamp="2026-03-02 12:57:13 +0000 UTC" firstStartedPulling="2026-03-02 12:58:41.08514635 +0000 UTC m=+125.113113298" lastFinishedPulling="2026-03-02 12:59:04.967752205 +0000 UTC m=+148.995719153" observedRunningTime="2026-03-02 12:59:06.367398104 +0000 UTC m=+150.395365052" watchObservedRunningTime="2026-03-02 12:59:20.032068264 +0000 UTC m=+164.060035211" Mar 2 12:59:20.033466 kubelet[2603]: I0302 12:59:20.032525 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-867c96dbbc-vh6px" podStartSLOduration=86.94856106 podStartE2EDuration="2m4.032509176s" podCreationTimestamp="2026-03-02 12:57:16 +0000 UTC" firstStartedPulling="2026-03-02 12:58:41.394992222 +0000 UTC m=+125.422959171" lastFinishedPulling="2026-03-02 12:59:18.478940338 +0000 UTC m=+162.506907287" observedRunningTime="2026-03-02 12:59:20.020055264 +0000 UTC m=+164.048022212" watchObservedRunningTime="2026-03-02 12:59:20.032509176 +0000 UTC m=+164.060476124" Mar 2 12:59:21.119473 sshd[5825]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:21.163023 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:58050.service: Deactivated successfully. Mar 2 12:59:21.168015 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 12:59:21.171444 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Mar 2 12:59:21.175025 systemd-logind[1440]: Removed session 13. 
Mar 2 12:59:22.752023 containerd[1470]: time="2026-03-02T12:59:22.751424097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:22.759996 containerd[1470]: time="2026-03-02T12:59:22.759847903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 12:59:22.782060 containerd[1470]: time="2026-03-02T12:59:22.781461058Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:22.787467 containerd[1470]: time="2026-03-02T12:59:22.786940642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:22.789654 containerd[1470]: time="2026-03-02T12:59:22.789597252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 4.308186194s" Mar 2 12:59:22.789654 containerd[1470]: time="2026-03-02T12:59:22.789649610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 12:59:22.793606 containerd[1470]: time="2026-03-02T12:59:22.793525091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:59:22.807239 containerd[1470]: time="2026-03-02T12:59:22.806634669Z" level=info msg="CreateContainer within sandbox \"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 12:59:23.172274 containerd[1470]: time="2026-03-02T12:59:23.171446217Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:23.177470 containerd[1470]: time="2026-03-02T12:59:23.176347449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 12:59:23.191350 containerd[1470]: time="2026-03-02T12:59:23.191176508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 397.577818ms" Mar 2 12:59:23.191350 containerd[1470]: time="2026-03-02T12:59:23.191241738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:59:23.198559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986684803.mount: Deactivated successfully. 
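Worth noting how different the two apiserver pulls look: the first pull of ghcr.io/flatcar/calico/apiserver:v3.31.3 (12:58:53, earlier in the log) read ~48.4 MB and took 11.845817997s, while this second one for the other apiserver pod is an ImageUpdate with only 77 bytes read and finishes in 397.577818ms, consistent with the layers already being on disk and only the manifest being re-resolved. The ratio, using the durations from the log:

```python
first_pull_s  = 11.845817997  # initial pull at 12:58:53, bytes read=48403149
second_pull_s = 0.397577818   # repeat resolution at 12:59:23, bytes read=77
print(f"repeat 'pull' is ~{first_pull_s / second_pull_s:.0f}x faster")  # ~30x: content already local
```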
Mar 2 12:59:23.246223 containerd[1470]: time="2026-03-02T12:59:23.246107800Z" level=info msg="CreateContainer within sandbox \"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"59a0dc6f406045b1be78973326fa28e15dc438fa54534f45a539abe41543287f\"" Mar 2 12:59:23.254032 containerd[1470]: time="2026-03-02T12:59:23.251685974Z" level=info msg="StartContainer for \"59a0dc6f406045b1be78973326fa28e15dc438fa54534f45a539abe41543287f\"" Mar 2 12:59:23.306055 containerd[1470]: time="2026-03-02T12:59:23.304246999Z" level=info msg="CreateContainer within sandbox \"726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:59:23.497722 systemd[1]: Started cri-containerd-59a0dc6f406045b1be78973326fa28e15dc438fa54534f45a539abe41543287f.scope - libcontainer container 59a0dc6f406045b1be78973326fa28e15dc438fa54534f45a539abe41543287f. Mar 2 12:59:23.582810 containerd[1470]: time="2026-03-02T12:59:23.582678287Z" level=info msg="CreateContainer within sandbox \"726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"688e4d1cb8ec5ee0fa649a46e4bac99b81cb07cff885da8c93ae4f29cdfbbafa\"" Mar 2 12:59:23.591778 containerd[1470]: time="2026-03-02T12:59:23.591402221Z" level=info msg="StartContainer for \"688e4d1cb8ec5ee0fa649a46e4bac99b81cb07cff885da8c93ae4f29cdfbbafa\"" Mar 2 12:59:23.646742 kubelet[2603]: E0302 12:59:23.644634 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:23.901031 containerd[1470]: time="2026-03-02T12:59:23.898847048Z" level=info msg="StartContainer for \"59a0dc6f406045b1be78973326fa28e15dc438fa54534f45a539abe41543287f\" returns successfully" Mar 2 12:59:23.953451 systemd[1]: Started cri-containerd-688e4d1cb8ec5ee0fa649a46e4bac99b81cb07cff885da8c93ae4f29cdfbbafa.scope - libcontainer container 688e4d1cb8ec5ee0fa649a46e4bac99b81cb07cff885da8c93ae4f29cdfbbafa. 
Mar 2 12:59:24.113141 kubelet[2603]: I0302 12:59:24.112923 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ktfwf" podStartSLOduration=83.724773413 podStartE2EDuration="2m8.112858986s" podCreationTimestamp="2026-03-02 12:57:16 +0000 UTC" firstStartedPulling="2026-03-02 12:58:38.404053596 +0000 UTC m=+122.432020554" lastFinishedPulling="2026-03-02 12:59:22.79213918 +0000 UTC m=+166.820106127" observedRunningTime="2026-03-02 12:59:24.106990523 +0000 UTC m=+168.134957481" watchObservedRunningTime="2026-03-02 12:59:24.112858986 +0000 UTC m=+168.140825954" Mar 2 12:59:24.354016 containerd[1470]: time="2026-03-02T12:59:24.353143281Z" level=info msg="StartContainer for \"688e4d1cb8ec5ee0fa649a46e4bac99b81cb07cff885da8c93ae4f29cdfbbafa\" returns successfully" Mar 2 12:59:24.662088 kubelet[2603]: E0302 12:59:24.657776 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:25.224052 kubelet[2603]: I0302 12:59:25.217537 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6778944f75-mnf89" podStartSLOduration=91.555060651 podStartE2EDuration="2m12.217507374s" podCreationTimestamp="2026-03-02 12:57:13 +0000 UTC" firstStartedPulling="2026-03-02 12:58:42.532803114 +0000 UTC m=+126.560770062" lastFinishedPulling="2026-03-02 12:59:23.195249836 +0000 UTC m=+167.223216785" observedRunningTime="2026-03-02 12:59:25.212384579 +0000 UTC m=+169.240351537" watchObservedRunningTime="2026-03-02 12:59:25.217507374 +0000 UTC m=+169.245474362" Mar 2 12:59:25.504944 kubelet[2603]: I0302 12:59:25.504037 2603 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 12:59:25.533101 kubelet[2603]: I0302 12:59:25.530874 2603 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 12:59:26.172675 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:58066.service - OpenSSH per-connection server daemon (10.0.0.1:58066). Mar 2 12:59:26.390017 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 58066 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:26.401173 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:26.465079 systemd-logind[1440]: New session 14 of user core. Mar 2 12:59:26.505375 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 12:59:27.207208 kubelet[2603]: I0302 12:59:27.206918 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:59:27.286127 sshd[6026]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:27.292773 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:58066.service: Deactivated successfully. Mar 2 12:59:27.297627 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 12:59:27.299866 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Mar 2 12:59:27.302957 systemd-logind[1440]: Removed session 14. Mar 2 12:59:32.354356 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:58954.service - OpenSSH per-connection server daemon (10.0.0.1:58954). 
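
The two pod_startup_latency_tracker entries above carry enough data to reconstruct the reported figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration works out to that gap minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The small check below redoes the arithmetic for the csi-node-driver-ktfwf entry using the monotonic m=+ offsets printed in the log; the relationship is inferred from these numbers alone, not from kubelet documentation.

    # Offsets (seconds since kubelet start) copied from the csi-node-driver-ktfwf entry above.
    first_started_pulling = 122.432020554   # m=+ value of firstStartedPulling
    last_finished_pulling = 166.820106127   # m=+ value of lastFinishedPulling

    # podStartE2EDuration as logged: 12:59:24.112858986 - 12:57:16 (podCreationTimestamp)
    e2e_duration = 128.112858986            # seconds, i.e. "2m8.112858986s"

    pull_window = last_finished_pulling - first_started_pulling   # 44.388085573 s
    slo_duration = e2e_duration - pull_window                     # 83.724773413 s

    print(f"image pull window     : {pull_window:.9f} s")
    print(f"reconstructed SLO time: {slo_duration:.9f} s")  # matches podStartSLOduration=83.724773413
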
Mar 2 12:59:32.855097 sshd[6046]: Accepted publickey for core from 10.0.0.1 port 58954 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:32.872159 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:32.911193 systemd-logind[1440]: New session 15 of user core. Mar 2 12:59:32.951959 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 12:59:33.920264 sshd[6046]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:33.961666 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:58954.service: Deactivated successfully. Mar 2 12:59:33.978971 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 12:59:33.984523 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Mar 2 12:59:33.988777 systemd-logind[1440]: Removed session 15. Mar 2 12:59:38.849032 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:51532.service - OpenSSH per-connection server daemon (10.0.0.1:51532). Mar 2 12:59:38.943341 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 51532 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:38.946253 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:38.967824 systemd-logind[1440]: New session 16 of user core. Mar 2 12:59:38.980549 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 12:59:39.038371 containerd[1470]: time="2026-03-02T12:59:39.037872789Z" level=info msg="StopPodSandbox for \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\"" Mar 2 12:59:39.670836 sshd[6086]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:39.699942 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:51532.service: Deactivated successfully. Mar 2 12:59:39.712073 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 12:59:39.736659 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Mar 2 12:59:39.767952 systemd-logind[1440]: Removed session 16. Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:39.496 [WARNING][6108] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"83de2541-d5e5-4ff8-9f13-b519d8c21fab", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0", Pod:"calico-apiserver-6778944f75-frr9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali97369de38f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:39.498 [INFO][6108] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:39.498 [INFO][6108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" iface="eth0" netns="" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:39.498 [INFO][6108] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:39.498 [INFO][6108] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.257 [INFO][6117] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.262 [INFO][6117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.263 [INFO][6117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.301 [WARNING][6117] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.301 [INFO][6117] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.313 [INFO][6117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:40.356858 containerd[1470]: 2026-03-02 12:59:40.342 [INFO][6108] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.386836 containerd[1470]: time="2026-03-02T12:59:40.385010285Z" level=info msg="TearDown network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" successfully" Mar 2 12:59:40.386836 containerd[1470]: time="2026-03-02T12:59:40.385820485Z" level=info msg="StopPodSandbox for \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" returns successfully" Mar 2 12:59:40.389430 containerd[1470]: time="2026-03-02T12:59:40.388827649Z" level=info msg="RemovePodSandbox for \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\"" Mar 2 12:59:40.389529 containerd[1470]: time="2026-03-02T12:59:40.389464886Z" level=info msg="Forcibly stopping sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\"" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.578 [WARNING][6136] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"83de2541-d5e5-4ff8-9f13-b519d8c21fab", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efafbd020539e34c23b7c437eb0740839ea6149db1e87882d3d89f0a5177b3d0", Pod:"calico-apiserver-6778944f75-frr9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali97369de38f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.578 [INFO][6136] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.578 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" iface="eth0" netns="" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.578 [INFO][6136] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.578 [INFO][6136] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.677 [INFO][6144] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.679 [INFO][6144] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.679 [INFO][6144] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.694 [WARNING][6144] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.694 [INFO][6144] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" HandleID="k8s-pod-network.a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Workload="localhost-k8s-calico--apiserver--6778944f75--frr9d-eth0" Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.700 [INFO][6144] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:40.718071 containerd[1470]: 2026-03-02 12:59:40.709 [INFO][6136] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1" Mar 2 12:59:40.718071 containerd[1470]: time="2026-03-02T12:59:40.716467767Z" level=info msg="TearDown network for sandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" successfully" Mar 2 12:59:40.752462 containerd[1470]: time="2026-03-02T12:59:40.752359422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:40.752462 containerd[1470]: time="2026-03-02T12:59:40.752505995Z" level=info msg="RemovePodSandbox \"a16773a0f0bf1b2b6430869964e4d2eb0136fc9bbe2f57c0123b81b5e15460e1\" returns successfully" Mar 2 12:59:40.753701 containerd[1470]: time="2026-03-02T12:59:40.753462882Z" level=info msg="StopPodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\"" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:40.959 [WARNING][6162] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"9e246d66-0a6c-4a48-969d-096fb291a66e", ResourceVersion:"1474", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6", Pod:"calico-apiserver-6778944f75-mnf89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali58d682b7fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:40.959 [INFO][6162] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:40.965 [INFO][6162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" iface="eth0" netns="" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:40.965 [INFO][6162] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:40.965 [INFO][6162] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.140 [INFO][6171] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.141 [INFO][6171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.145 [INFO][6171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.176 [WARNING][6171] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.176 [INFO][6171] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.186 [INFO][6171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:41.218566 containerd[1470]: 2026-03-02 12:59:41.204 [INFO][6162] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.218566 containerd[1470]: time="2026-03-02T12:59:41.214120182Z" level=info msg="TearDown network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" successfully" Mar 2 12:59:41.218566 containerd[1470]: time="2026-03-02T12:59:41.214157432Z" level=info msg="StopPodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" returns successfully" Mar 2 12:59:41.218566 containerd[1470]: time="2026-03-02T12:59:41.215427388Z" level=info msg="RemovePodSandbox for \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\"" Mar 2 12:59:41.218566 containerd[1470]: time="2026-03-02T12:59:41.215477121Z" level=info msg="Forcibly stopping sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\"" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.446 [WARNING][6188] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0", GenerateName:"calico-apiserver-6778944f75-", Namespace:"calico-system", SelfLink:"", UID:"9e246d66-0a6c-4a48-969d-096fb291a66e", ResourceVersion:"1474", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778944f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"726277cfe648df5c11d20482f8ad21458186a3fa36c5dfa4c16cf635443b50f6", Pod:"calico-apiserver-6778944f75-mnf89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali58d682b7fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.446 [INFO][6188] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.446 [INFO][6188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" iface="eth0" netns="" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.446 [INFO][6188] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.446 [INFO][6188] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.564 [INFO][6196] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.566 [INFO][6196] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.568 [INFO][6196] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.591 [WARNING][6196] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.591 [INFO][6196] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" HandleID="k8s-pod-network.641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Workload="localhost-k8s-calico--apiserver--6778944f75--mnf89-eth0" Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.602 [INFO][6196] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:41.636465 containerd[1470]: 2026-03-02 12:59:41.617 [INFO][6188] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca" Mar 2 12:59:41.640720 containerd[1470]: time="2026-03-02T12:59:41.636469075Z" level=info msg="TearDown network for sandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" successfully" Mar 2 12:59:41.644262 containerd[1470]: time="2026-03-02T12:59:41.643790719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:41.644262 containerd[1470]: time="2026-03-02T12:59:41.643947230Z" level=info msg="RemovePodSandbox \"641eca8b95e5af08f4b5c9fc73632bc59d64d85b03a3a49f7c1a7d2a6ae8c3ca\" returns successfully" Mar 2 12:59:41.645828 containerd[1470]: time="2026-03-02T12:59:41.645248275Z" level=info msg="StopPodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\"" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.835 [WARNING][6212] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lt57c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"82df1e0b-93ca-4d37-95cc-8fb21c222566", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a", Pod:"coredns-66bc5c9577-lt57c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec77602dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.836 [INFO][6212] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.836 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" iface="eth0" netns="" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.836 [INFO][6212] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.836 [INFO][6212] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.972 [INFO][6221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.972 [INFO][6221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:41.972 [INFO][6221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:42.019 [WARNING][6221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:42.019 [INFO][6221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:42.136 [INFO][6221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:42.173841 containerd[1470]: 2026-03-02 12:59:42.153 [INFO][6212] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:42.173841 containerd[1470]: time="2026-03-02T12:59:42.173631583Z" level=info msg="TearDown network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" successfully" Mar 2 12:59:42.173841 containerd[1470]: time="2026-03-02T12:59:42.173677408Z" level=info msg="StopPodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" returns successfully" Mar 2 12:59:42.185423 containerd[1470]: time="2026-03-02T12:59:42.175921951Z" level=info msg="RemovePodSandbox for \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\"" Mar 2 12:59:42.185423 containerd[1470]: time="2026-03-02T12:59:42.175963127Z" level=info msg="Forcibly stopping sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\"" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.119 [WARNING][6239] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lt57c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"82df1e0b-93ca-4d37-95cc-8fb21c222566", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61e6abe21d4d4be163afb05779c28c762496944e57f33c4f73871f14955b540a", Pod:"coredns-66bc5c9577-lt57c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec77602dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.145 [INFO][6239] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.145 [INFO][6239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" iface="eth0" netns="" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.145 [INFO][6239] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.145 [INFO][6239] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.299 [INFO][6248] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.303 [INFO][6248] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.303 [INFO][6248] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.371 [WARNING][6248] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.372 [INFO][6248] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" HandleID="k8s-pod-network.106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Workload="localhost-k8s-coredns--66bc5c9577--lt57c-eth0" Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.385 [INFO][6248] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:43.431597 containerd[1470]: 2026-03-02 12:59:43.393 [INFO][6239] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743" Mar 2 12:59:43.431597 containerd[1470]: time="2026-03-02T12:59:43.429585482Z" level=info msg="TearDown network for sandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" successfully" Mar 2 12:59:43.458076 containerd[1470]: time="2026-03-02T12:59:43.457858809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:43.458076 containerd[1470]: time="2026-03-02T12:59:43.458045577Z" level=info msg="RemovePodSandbox \"106d5c31ae6dbbc196e9bb2c56ae27623fae18dd2626edfe78a4b4a77b4c3743\" returns successfully" Mar 2 12:59:43.459142 containerd[1470]: time="2026-03-02T12:59:43.458906972Z" level=info msg="StopPodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\"" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.351 [WARNING][6266] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktfwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6027383d-96c9-465b-88ba-00723209fa19", ResourceVersion:"1449", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6db5596769", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936", Pod:"csi-node-driver-ktfwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbc0bc5b616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.351 [INFO][6266] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.351 [INFO][6266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" iface="eth0" netns="" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.351 [INFO][6266] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.351 [INFO][6266] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.401 [INFO][6274] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.401 [INFO][6274] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.401 [INFO][6274] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.415 [WARNING][6274] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.415 [INFO][6274] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.464 [INFO][6274] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:44.474539 containerd[1470]: 2026-03-02 12:59:44.468 [INFO][6266] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:44.474539 containerd[1470]: time="2026-03-02T12:59:44.474163850Z" level=info msg="TearDown network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" successfully" Mar 2 12:59:44.474539 containerd[1470]: time="2026-03-02T12:59:44.474205297Z" level=info msg="StopPodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" returns successfully" Mar 2 12:59:44.481064 containerd[1470]: time="2026-03-02T12:59:44.478553386Z" level=info msg="RemovePodSandbox for \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\"" Mar 2 12:59:44.481064 containerd[1470]: time="2026-03-02T12:59:44.478596437Z" level=info msg="Forcibly stopping sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\"" Mar 2 12:59:44.718472 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:51548.service - OpenSSH per-connection server daemon (10.0.0.1:51548). Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.663 [WARNING][6291] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktfwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6027383d-96c9-465b-88ba-00723209fa19", ResourceVersion:"1449", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6db5596769", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1af80831a02e1242e07759fa89c4a6f985e3ba96ac2232fd0caf9c6e2684936", Pod:"csi-node-driver-ktfwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbc0bc5b616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.664 [INFO][6291] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.664 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" iface="eth0" netns="" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.664 [INFO][6291] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.664 [INFO][6291] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.965 [INFO][6299] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.966 [INFO][6299] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:44.967 [INFO][6299] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:45.032 [WARNING][6299] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:45.032 [INFO][6299] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" HandleID="k8s-pod-network.f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Workload="localhost-k8s-csi--node--driver--ktfwf-eth0" Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:45.061 [INFO][6299] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:45.075104 containerd[1470]: 2026-03-02 12:59:45.068 [INFO][6291] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d" Mar 2 12:59:45.075104 containerd[1470]: time="2026-03-02T12:59:45.073520364Z" level=info msg="TearDown network for sandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" successfully" Mar 2 12:59:45.086445 containerd[1470]: time="2026-03-02T12:59:45.086048119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:45.086445 containerd[1470]: time="2026-03-02T12:59:45.086262588Z" level=info msg="RemovePodSandbox \"f8d04e0d10884a0fab4485c2244af30f74cbe461b33c662779b5fbc1dadef66d\" returns successfully" Mar 2 12:59:45.088096 containerd[1470]: time="2026-03-02T12:59:45.087357689Z" level=info msg="StopPodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\"" Mar 2 12:59:45.112158 sshd[6305]: Accepted publickey for core from 10.0.0.1 port 51548 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:45.119671 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:45.169474 systemd-logind[1440]: New session 17 of user core. Mar 2 12:59:45.180784 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.398 [WARNING][6320] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0", GenerateName:"calico-kube-controllers-867c96dbbc-", Namespace:"calico-system", SelfLink:"", UID:"eb7eb2f6-04fe-4439-a025-db21095b18ae", ResourceVersion:"1430", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"867c96dbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5", Pod:"calico-kube-controllers-867c96dbbc-vh6px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9027afd2620", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.398 [INFO][6320] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.398 [INFO][6320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" iface="eth0" netns="" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.398 [INFO][6320] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.398 [INFO][6320] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.488 [INFO][6335] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.490 [INFO][6335] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.490 [INFO][6335] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.552 [WARNING][6335] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.552 [INFO][6335] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.566 [INFO][6335] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:45.597090 containerd[1470]: 2026-03-02 12:59:45.579 [INFO][6320] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:45.601169 containerd[1470]: time="2026-03-02T12:59:45.597383299Z" level=info msg="TearDown network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" successfully" Mar 2 12:59:45.601169 containerd[1470]: time="2026-03-02T12:59:45.597426319Z" level=info msg="StopPodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" returns successfully" Mar 2 12:59:45.601169 containerd[1470]: time="2026-03-02T12:59:45.598973332Z" level=info msg="RemovePodSandbox for \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\"" Mar 2 12:59:45.601169 containerd[1470]: time="2026-03-02T12:59:45.599017355Z" level=info msg="Forcibly stopping sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\"" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.867 [WARNING][6357] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0", GenerateName:"calico-kube-controllers-867c96dbbc-", Namespace:"calico-system", SelfLink:"", UID:"eb7eb2f6-04fe-4439-a025-db21095b18ae", ResourceVersion:"1430", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"867c96dbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"513b0d489eb1c2a168cc22b9fef79fba94e45290294a9211f6289f4298f802a5", Pod:"calico-kube-controllers-867c96dbbc-vh6px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9027afd2620", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.871 [INFO][6357] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.872 [INFO][6357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" iface="eth0" netns="" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.872 [INFO][6357] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.872 [INFO][6357] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.973 [INFO][6367] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.974 [INFO][6367] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.975 [INFO][6367] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.991 [WARNING][6367] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:45.991 [INFO][6367] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" HandleID="k8s-pod-network.b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Workload="localhost-k8s-calico--kube--controllers--867c96dbbc--vh6px-eth0" Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:46.003 [INFO][6367] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:46.029256 containerd[1470]: 2026-03-02 12:59:46.016 [INFO][6357] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187" Mar 2 12:59:46.029256 containerd[1470]: time="2026-03-02T12:59:46.027094005Z" level=info msg="TearDown network for sandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" successfully" Mar 2 12:59:46.052110 containerd[1470]: time="2026-03-02T12:59:46.049759314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:46.052110 containerd[1470]: time="2026-03-02T12:59:46.049929500Z" level=info msg="RemovePodSandbox \"b9bb2bf440c4581798a7a4651e671168087fcebf093afaeee7db28842c071187\" returns successfully" Mar 2 12:59:46.052110 containerd[1470]: time="2026-03-02T12:59:46.050982142Z" level=info msg="StopPodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\"" Mar 2 12:59:46.460957 sshd[6305]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:46.480466 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:51548.service: Deactivated successfully. Mar 2 12:59:46.485021 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.354 [WARNING][6385] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"e94766e6-0a94-44da-adf7-e646b07431ce", ResourceVersion:"1502", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e", Pod:"goldmane-54d7f6b6d6-x86ql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali359911602cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.354 [INFO][6385] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.354 [INFO][6385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" iface="eth0" netns="" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.354 [INFO][6385] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.354 [INFO][6385] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.445 [INFO][6395] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.445 [INFO][6395] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.446 [INFO][6395] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.464 [WARNING][6395] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.464 [INFO][6395] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.476 [INFO][6395] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:46.488048 containerd[1470]: 2026-03-02 12:59:46.481 [INFO][6385] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.489395 containerd[1470]: time="2026-03-02T12:59:46.488099587Z" level=info msg="TearDown network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" successfully" Mar 2 12:59:46.489395 containerd[1470]: time="2026-03-02T12:59:46.488137658Z" level=info msg="StopPodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" returns successfully" Mar 2 12:59:46.489395 containerd[1470]: time="2026-03-02T12:59:46.489349176Z" level=info msg="RemovePodSandbox for \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\"" Mar 2 12:59:46.489395 containerd[1470]: time="2026-03-02T12:59:46.489375144Z" level=info msg="Forcibly stopping sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\"" Mar 2 12:59:46.488163 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Mar 2 12:59:46.491801 systemd-logind[1440]: Removed session 17. Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.695 [WARNING][6415] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"e94766e6-0a94-44da-adf7-e646b07431ce", ResourceVersion:"1502", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7233b36590d31419060b34f23dafb4fd0fd0d640af03b0ddd08d2d396b07412e", Pod:"goldmane-54d7f6b6d6-x86ql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali359911602cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.696 [INFO][6415] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.696 [INFO][6415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" iface="eth0" netns="" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.696 [INFO][6415] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.696 [INFO][6415] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.808 [INFO][6424] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.808 [INFO][6424] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.810 [INFO][6424] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.848 [WARNING][6424] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.848 [INFO][6424] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" HandleID="k8s-pod-network.4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Workload="localhost-k8s-goldmane--54d7f6b6d6--x86ql-eth0" Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.856 [INFO][6424] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:46.970438 containerd[1470]: 2026-03-02 12:59:46.946 [INFO][6415] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9" Mar 2 12:59:46.979829 containerd[1470]: time="2026-03-02T12:59:46.974061715Z" level=info msg="TearDown network for sandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" successfully" Mar 2 12:59:47.039727 containerd[1470]: time="2026-03-02T12:59:47.038685486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:47.039727 containerd[1470]: time="2026-03-02T12:59:47.038976078Z" level=info msg="RemovePodSandbox \"4bb263b94ed9fb317baa5d2fa9d04ed4dd9bac735478db5dda0087169f6f39c9\" returns successfully" Mar 2 12:59:47.041907 containerd[1470]: time="2026-03-02T12:59:47.041827903Z" level=info msg="StopPodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\"" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.318 [WARNING][6441] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4pk99-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a17f4e37-d122-4cb7-a268-77a8710e322d", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68", Pod:"coredns-66bc5c9577-4pk99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9957d044f00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.321 [INFO][6441] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.321 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" iface="eth0" netns="" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.334 [INFO][6441] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.334 [INFO][6441] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.451 [INFO][6449] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.452 [INFO][6449] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.452 [INFO][6449] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.478 [WARNING][6449] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.478 [INFO][6449] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.483 [INFO][6449] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:47.508198 containerd[1470]: 2026-03-02 12:59:47.497 [INFO][6441] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:47.508198 containerd[1470]: time="2026-03-02T12:59:47.506975469Z" level=info msg="TearDown network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" successfully" Mar 2 12:59:47.508198 containerd[1470]: time="2026-03-02T12:59:47.507012719Z" level=info msg="StopPodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" returns successfully" Mar 2 12:59:47.544121 containerd[1470]: time="2026-03-02T12:59:47.543991392Z" level=info msg="RemovePodSandbox for \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\"" Mar 2 12:59:47.544121 containerd[1470]: time="2026-03-02T12:59:47.544119770Z" level=info msg="Forcibly stopping sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\"" Mar 2 12:59:47.624508 kubelet[2603]: E0302 12:59:47.623835 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.124 [WARNING][6467] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4pk99-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a17f4e37-d122-4cb7-a268-77a8710e322d", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57d72188138e2b6e943f4e8ce1d7370a23c5f775af35fa4a5ae06a1f31ea8e68", Pod:"coredns-66bc5c9577-4pk99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9957d044f00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.146 [INFO][6467] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.146 [INFO][6467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" iface="eth0" netns="" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.146 [INFO][6467] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.146 [INFO][6467] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.253 [INFO][6475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.253 [INFO][6475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.253 [INFO][6475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.559 [WARNING][6475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.560 [INFO][6475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" HandleID="k8s-pod-network.14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Workload="localhost-k8s-coredns--66bc5c9577--4pk99-eth0" Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.571 [INFO][6475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:48.583727 containerd[1470]: 2026-03-02 12:59:48.578 [INFO][6467] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0" Mar 2 12:59:48.583727 containerd[1470]: time="2026-03-02T12:59:48.583115551Z" level=info msg="TearDown network for sandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" successfully" Mar 2 12:59:48.609920 containerd[1470]: time="2026-03-02T12:59:48.609273631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:48.609920 containerd[1470]: time="2026-03-02T12:59:48.609497619Z" level=info msg="RemovePodSandbox \"14ee8ec068ed8d2decc20888b0889296903138113d6ceec40a199a54ef8207a0\" returns successfully" Mar 2 12:59:51.638512 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:45204.service - OpenSSH per-connection server daemon (10.0.0.1:45204). Mar 2 12:59:51.834707 sshd[6506]: Accepted publickey for core from 10.0.0.1 port 45204 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:51.839752 sshd[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:51.883229 systemd-logind[1440]: New session 18 of user core. Mar 2 12:59:51.887271 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 2 12:59:52.754726 sshd[6506]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:52.775900 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:45204.service: Deactivated successfully. Mar 2 12:59:52.780767 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 12:59:52.786522 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Mar 2 12:59:52.790746 systemd-logind[1440]: Removed session 18. Mar 2 12:59:57.786709 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:45212.service - OpenSSH per-connection server daemon (10.0.0.1:45212). Mar 2 12:59:57.904386 sshd[6556]: Accepted publickey for core from 10.0.0.1 port 45212 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:57.907884 sshd[6556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:57.940604 systemd-logind[1440]: New session 19 of user core. Mar 2 12:59:57.949124 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 12:59:58.548678 sshd[6556]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:58.560626 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:45212.service: Deactivated successfully. Mar 2 12:59:58.566454 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 12:59:58.572223 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Mar 2 12:59:58.585886 systemd-logind[1440]: Removed session 19. Mar 2 13:00:02.647001 kubelet[2603]: E0302 13:00:02.645020 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:03.598788 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:41544.service - OpenSSH per-connection server daemon (10.0.0.1:41544). Mar 2 13:00:03.819349 sshd[6593]: Accepted publickey for core from 10.0.0.1 port 41544 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:03.821899 sshd[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:03.876667 systemd-logind[1440]: New session 20 of user core. Mar 2 13:00:03.887648 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:00:04.393745 sshd[6593]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:04.410886 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Mar 2 13:00:04.414888 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:41544.service: Deactivated successfully. Mar 2 13:00:04.452631 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:00:04.461786 systemd-logind[1440]: Removed session 20. Mar 2 13:00:05.631875 kubelet[2603]: E0302 13:00:05.631618 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:09.420562 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:58900.service - OpenSSH per-connection server daemon (10.0.0.1:58900). Mar 2 13:00:09.498391 sshd[6648]: Accepted publickey for core from 10.0.0.1 port 58900 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:09.501661 sshd[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:09.605017 systemd-logind[1440]: New session 21 of user core. Mar 2 13:00:09.624491 systemd[1]: Started session-21.scope - Session 21 of User core. 
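Stepping back to the StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries further up: those are kubelet asking containerd, over the CRI API, to tear down sandboxes that are already gone, which is why containerd logs the benign "an error occurred when try to find sandbox: not found" warning before RemovePodSandbox still returns successfully. As a rough sketch of the same API surface, the snippet below lists pod sandboxes over containerd's CRI socket and removes the ones that are no longer ready; the socket path and the removal policy are assumptions for illustration, not a reproduction of kubelet's garbage collection.

```go
// Hedged sketch: enumerate pod sandboxes over containerd's CRI endpoint and
// remove the stopped ones, roughly the RemovePodSandbox flow seen in the log.
// The socket path is an assumption; adjust for your host.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed default containerd CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatalf("ListPodSandbox: %v", err)
	}
	for _, sb := range resp.Items {
		if sb.State != runtimeapi.PodSandboxState_SANDBOX_NOTREADY {
			continue // keep running sandboxes
		}
		log.Printf("removing stopped sandbox %s (%s)", sb.Id, sb.Metadata.Name)
		if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			log.Printf("RemovePodSandbox %s: %v", sb.Id, err)
		}
	}
}
```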
Mar 2 13:00:10.171550 sshd[6648]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:10.182365 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:58900.service: Deactivated successfully. Mar 2 13:00:10.186950 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:00:10.189676 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:00:10.192699 systemd-logind[1440]: Removed session 21. Mar 2 13:00:13.623851 kubelet[2603]: E0302 13:00:13.619121 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:15.225828 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:58912.service - OpenSSH per-connection server daemon (10.0.0.1:58912). Mar 2 13:00:15.344390 sshd[6664]: Accepted publickey for core from 10.0.0.1 port 58912 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:15.350549 sshd[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:15.403894 systemd-logind[1440]: New session 22 of user core. Mar 2 13:00:15.420752 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 13:00:15.764392 sshd[6664]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:15.805890 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:58912.service: Deactivated successfully. Mar 2 13:00:15.816265 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:00:15.826537 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:00:15.837967 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:58916.service - OpenSSH per-connection server daemon (10.0.0.1:58916). Mar 2 13:00:15.840154 systemd-logind[1440]: Removed session 22. Mar 2 13:00:15.942102 sshd[6681]: Accepted publickey for core from 10.0.0.1 port 58916 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:15.943392 sshd[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:15.985929 systemd-logind[1440]: New session 23 of user core. Mar 2 13:00:16.008231 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 13:00:16.445372 sshd[6681]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:16.462068 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:58916.service: Deactivated successfully. Mar 2 13:00:16.465091 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 13:00:16.480263 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Mar 2 13:00:16.505196 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:58922.service - OpenSSH per-connection server daemon (10.0.0.1:58922). Mar 2 13:00:16.517594 systemd-logind[1440]: Removed session 23. Mar 2 13:00:16.748043 sshd[6693]: Accepted publickey for core from 10.0.0.1 port 58922 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:16.750643 sshd[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:16.802001 systemd-logind[1440]: New session 24 of user core. Mar 2 13:00:16.813690 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 13:00:18.364959 sshd[6693]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:18.376042 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:58922.service: Deactivated successfully. Mar 2 13:00:18.380562 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 13:00:18.382570 systemd-logind[1440]: Session 24 logged out. 
Waiting for processes to exit. Mar 2 13:00:18.390178 systemd-logind[1440]: Removed session 24. Mar 2 13:00:19.633148 kubelet[2603]: E0302 13:00:19.618646 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:23.457052 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466). Mar 2 13:00:23.930625 sshd[6819]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:23.945398 sshd[6819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:23.997532 systemd-logind[1440]: New session 25 of user core. Mar 2 13:00:24.020443 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 2 13:00:25.233353 sshd[6819]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:25.239537 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:51466.service: Deactivated successfully. Mar 2 13:00:25.243444 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 13:00:25.245078 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit. Mar 2 13:00:25.247711 systemd-logind[1440]: Removed session 25. Mar 2 13:00:30.306537 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:49908.service - OpenSSH per-connection server daemon (10.0.0.1:49908). Mar 2 13:00:30.513369 sshd[6835]: Accepted publickey for core from 10.0.0.1 port 49908 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:30.518361 sshd[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:30.550788 systemd-logind[1440]: New session 26 of user core. Mar 2 13:00:30.569532 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 2 13:00:31.411450 sshd[6835]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:31.448756 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit. Mar 2 13:00:31.449791 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:49908.service: Deactivated successfully. Mar 2 13:00:31.464506 systemd[1]: session-26.scope: Deactivated successfully. Mar 2 13:00:31.474860 systemd-logind[1440]: Removed session 26. Mar 2 13:00:35.627847 kubelet[2603]: E0302 13:00:35.622158 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:36.701950 systemd[1]: Started sshd@26-10.0.0.12:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918). Mar 2 13:00:36.993556 sshd[6850]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:36.998607 sshd[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:37.010059 systemd-logind[1440]: New session 27 of user core. Mar 2 13:00:37.019595 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 2 13:00:37.397743 sshd[6850]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:37.405200 systemd[1]: sshd@26-10.0.0.12:22-10.0.0.1:49918.service: Deactivated successfully. Mar 2 13:00:37.412089 systemd[1]: session-27.scope: Deactivated successfully. Mar 2 13:00:37.414710 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit. Mar 2 13:00:37.418141 systemd-logind[1440]: Removed session 27. 
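The sshd@N-10.0.0.12:22-10.0.0.1:PORT.service units and the paired session-N.scope entries come from per-connection socket activation of sshd plus systemd-logind session tracking, which is why every login produces a "New session N of user core" and every logout a "Removed session N". The same sessions can be inspected programmatically by asking logind over D-Bus; the sketch below uses the godbus binding and the standard org.freedesktop.login1 Manager.ListSessions call (the library choice and result handling are assumptions for illustration).

```go
// Sketch: list the logind sessions behind the "New session N of user core" /
// "Removed session N" entries above, via the org.freedesktop.login1 D-Bus API.
// Uses the godbus/dbus binding (an assumption; any D-Bus client works).
package main

import (
	"fmt"
	"log"

	"github.com/godbus/dbus/v5"
)

func main() {
	conn, err := dbus.SystemBus()
	if err != nil {
		log.Fatalf("connect to system bus: %v", err)
	}
	defer conn.Close()

	logind := conn.Object("org.freedesktop.login1", dbus.ObjectPath("/org/freedesktop/login1"))

	// ListSessions returns an array of (session_id, uid, user, seat, object_path).
	var sessions [][]interface{}
	if err := logind.Call("org.freedesktop.login1.Manager.ListSessions", 0).Store(&sessions); err != nil {
		log.Fatalf("ListSessions: %v", err)
	}
	for _, s := range sessions {
		// s[0]=id, s[1]=uid, s[2]=user, s[3]=seat
		fmt.Printf("session %v: user=%v uid=%v seat=%v\n", s[0], s[2], s[1], s[3])
	}
}
```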
Mar 2 13:00:42.497883 systemd[1]: Started sshd@27-10.0.0.12:22-10.0.0.1:50088.service - OpenSSH per-connection server daemon (10.0.0.1:50088). Mar 2 13:00:42.661359 sshd[6888]: Accepted publickey for core from 10.0.0.1 port 50088 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:42.664617 sshd[6888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:42.696125 systemd-logind[1440]: New session 28 of user core. Mar 2 13:00:42.728188 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 2 13:00:43.380028 sshd[6888]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:43.396385 systemd[1]: sshd@27-10.0.0.12:22-10.0.0.1:50088.service: Deactivated successfully. Mar 2 13:00:43.406570 systemd[1]: session-28.scope: Deactivated successfully. Mar 2 13:00:43.412507 systemd-logind[1440]: Session 28 logged out. Waiting for processes to exit. Mar 2 13:00:43.426807 systemd[1]: Started sshd@28-10.0.0.12:22-10.0.0.1:50094.service - OpenSSH per-connection server daemon (10.0.0.1:50094). Mar 2 13:00:43.430779 systemd-logind[1440]: Removed session 28. Mar 2 13:00:43.553586 sshd[6903]: Accepted publickey for core from 10.0.0.1 port 50094 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:43.567467 sshd[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:43.595409 systemd-logind[1440]: New session 29 of user core. Mar 2 13:00:43.626489 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 2 13:00:45.649420 sshd[6903]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:45.673089 systemd[1]: sshd@28-10.0.0.12:22-10.0.0.1:50094.service: Deactivated successfully. Mar 2 13:00:45.680387 systemd[1]: session-29.scope: Deactivated successfully. Mar 2 13:00:45.703743 systemd-logind[1440]: Session 29 logged out. Waiting for processes to exit. Mar 2 13:00:45.737687 systemd[1]: Started sshd@29-10.0.0.12:22-10.0.0.1:50108.service - OpenSSH per-connection server daemon (10.0.0.1:50108). Mar 2 13:00:45.747440 systemd-logind[1440]: Removed session 29. Mar 2 13:00:46.962415 sshd[6917]: Accepted publickey for core from 10.0.0.1 port 50108 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:46.975222 sshd[6917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:46.999665 systemd-logind[1440]: New session 30 of user core. Mar 2 13:00:47.018760 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 2 13:00:49.363771 sshd[6917]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:49.429224 systemd[1]: sshd@29-10.0.0.12:22-10.0.0.1:50108.service: Deactivated successfully. Mar 2 13:00:49.441772 systemd[1]: session-30.scope: Deactivated successfully. Mar 2 13:00:49.442133 systemd[1]: session-30.scope: Consumed 1.155s CPU time. Mar 2 13:00:49.451850 systemd-logind[1440]: Session 30 logged out. Waiting for processes to exit. Mar 2 13:00:49.462794 systemd[1]: Started sshd@30-10.0.0.12:22-10.0.0.1:55330.service - OpenSSH per-connection server daemon (10.0.0.1:55330). Mar 2 13:00:49.479363 systemd-logind[1440]: Removed session 30. Mar 2 13:00:49.620092 sshd[6948]: Accepted publickey for core from 10.0.0.1 port 55330 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:49.645534 sshd[6948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:49.682238 systemd-logind[1440]: New session 31 of user core. 
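The "session-30.scope: Consumed 1.155s CPU time" line is systemd's per-cgroup CPU accounting, reported when the session scope is cleaned up. On a cgroup v2 host the same counter can be read directly from the scope's cpu.stat file (the usage_usec field) while the session still exists; the path below assumes a typical layout (user.slice/user-500.slice/session-30.scope) and should be adjusted for the UID and session number in question.

```go
// Sketch: read cgroup v2 CPU usage for a logind session scope, the counter
// behind "session-30.scope: Consumed 1.155s CPU time". The cgroup path is an
// assumption for a common layout and only resolves while the scope exists.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
	"time"
)

func main() {
	path := "/sys/fs/cgroup/user.slice/user-500.slice/session-30.scope/cpu.stat" // assumed path

	f, err := os.Open(path)
	if err != nil {
		log.Fatalf("open %s: %v", path, err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, err := strconv.ParseInt(fields[1], 10, 64)
			if err != nil {
				log.Fatalf("parse usage_usec: %v", err)
			}
			fmt.Printf("CPU consumed: %v\n", time.Duration(usec)*time.Microsecond)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("read cpu.stat: %v", err)
	}
}
```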
Mar 2 13:00:49.698885 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 2 13:00:50.640794 kubelet[2603]: E0302 13:00:50.640123 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:52.443669 sshd[6948]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:52.480209 systemd[1]: sshd@30-10.0.0.12:22-10.0.0.1:55330.service: Deactivated successfully. Mar 2 13:00:52.493663 systemd[1]: session-31.scope: Deactivated successfully. Mar 2 13:00:52.501397 systemd[1]: session-31.scope: Consumed 1.021s CPU time. Mar 2 13:00:52.511918 systemd-logind[1440]: Session 31 logged out. Waiting for processes to exit. Mar 2 13:00:52.532764 systemd[1]: Started sshd@31-10.0.0.12:22-10.0.0.1:55334.service - OpenSSH per-connection server daemon (10.0.0.1:55334). Mar 2 13:00:52.542541 systemd-logind[1440]: Removed session 31. Mar 2 13:00:52.720482 sshd[6982]: Accepted publickey for core from 10.0.0.1 port 55334 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:52.729852 sshd[6982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:52.769725 systemd-logind[1440]: New session 32 of user core. Mar 2 13:00:52.783800 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 2 13:00:53.291781 sshd[6982]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:53.369990 systemd[1]: sshd@31-10.0.0.12:22-10.0.0.1:55334.service: Deactivated successfully. Mar 2 13:00:53.377730 systemd[1]: session-32.scope: Deactivated successfully. Mar 2 13:00:53.386867 systemd-logind[1440]: Session 32 logged out. Waiting for processes to exit. Mar 2 13:00:53.393262 systemd-logind[1440]: Removed session 32. Mar 2 13:00:58.377821 systemd[1]: Started sshd@32-10.0.0.12:22-10.0.0.1:55336.service - OpenSSH per-connection server daemon (10.0.0.1:55336). Mar 2 13:00:58.512488 sshd[7018]: Accepted publickey for core from 10.0.0.1 port 55336 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:58.516827 sshd[7018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:58.572529 systemd-logind[1440]: New session 33 of user core. Mar 2 13:00:58.590607 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 2 13:00:59.069616 sshd[7018]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:59.079624 systemd[1]: sshd@32-10.0.0.12:22-10.0.0.1:55336.service: Deactivated successfully. Mar 2 13:00:59.090151 systemd[1]: session-33.scope: Deactivated successfully. Mar 2 13:00:59.097178 systemd-logind[1440]: Session 33 logged out. Waiting for processes to exit. Mar 2 13:00:59.100116 systemd-logind[1440]: Removed session 33. Mar 2 13:01:04.087139 systemd[1]: Started sshd@33-10.0.0.12:22-10.0.0.1:44662.service - OpenSSH per-connection server daemon (10.0.0.1:44662). Mar 2 13:01:04.213119 sshd[7032]: Accepted publickey for core from 10.0.0.1 port 44662 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:04.221832 sshd[7032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:04.261772 systemd-logind[1440]: New session 34 of user core. Mar 2 13:01:04.271797 systemd[1]: Started session-34.scope - Session 34 of User core. 
Mar 2 13:01:04.708857 sshd[7032]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:04.721987 systemd[1]: sshd@33-10.0.0.12:22-10.0.0.1:44662.service: Deactivated successfully. Mar 2 13:01:04.729497 systemd[1]: session-34.scope: Deactivated successfully. Mar 2 13:01:04.739100 systemd-logind[1440]: Session 34 logged out. Waiting for processes to exit. Mar 2 13:01:04.746042 systemd-logind[1440]: Removed session 34. Mar 2 13:01:08.644051 kubelet[2603]: E0302 13:01:08.643447 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:09.811179 systemd[1]: Started sshd@34-10.0.0.12:22-10.0.0.1:50742.service - OpenSSH per-connection server daemon (10.0.0.1:50742). Mar 2 13:01:09.897141 sshd[7068]: Accepted publickey for core from 10.0.0.1 port 50742 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:09.901208 sshd[7068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:09.943890 systemd-logind[1440]: New session 35 of user core. Mar 2 13:01:09.952598 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 2 13:01:10.818487 sshd[7068]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:10.966747 systemd[1]: sshd@34-10.0.0.12:22-10.0.0.1:50742.service: Deactivated successfully. Mar 2 13:01:10.979365 systemd[1]: session-35.scope: Deactivated successfully. Mar 2 13:01:10.984898 systemd-logind[1440]: Session 35 logged out. Waiting for processes to exit. Mar 2 13:01:10.987117 systemd-logind[1440]: Removed session 35. Mar 2 13:01:11.634150 kubelet[2603]: E0302 13:01:11.619486 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:15.768482 systemd[1]: Started sshd@35-10.0.0.12:22-10.0.0.1:50756.service - OpenSSH per-connection server daemon (10.0.0.1:50756). Mar 2 13:01:16.020704 sshd[7093]: Accepted publickey for core from 10.0.0.1 port 50756 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:16.022246 sshd[7093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:16.056246 systemd-logind[1440]: New session 36 of user core. Mar 2 13:01:16.061715 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 2 13:01:16.638120 kubelet[2603]: E0302 13:01:16.620740 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:16.707403 sshd[7093]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:16.722257 systemd[1]: sshd@35-10.0.0.12:22-10.0.0.1:50756.service: Deactivated successfully. Mar 2 13:01:16.746913 systemd[1]: session-36.scope: Deactivated successfully. Mar 2 13:01:16.749576 systemd-logind[1440]: Session 36 logged out. Waiting for processes to exit. Mar 2 13:01:16.754681 systemd-logind[1440]: Removed session 36. Mar 2 13:01:17.620498 kubelet[2603]: E0302 13:01:17.618867 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:21.216822 systemd[1]: run-containerd-runc-k8s.io-d10cfceac3f92bce36bf9b00309631d1a8d6f5a9cabf3221bcfd393fd9e88b5b-runc.xzvaWU.mount: Deactivated successfully. 
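The recurring kubelet dns.go:154 "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than kubelet will pass through to pods; the "applied nameserver line" it falls back to (1.1.1.1 1.0.0.1 8.8.8.8) shows the cap of three in effect. A quick way to spot the condition on the node is sketched below; the resolv.conf path is the usual default and the limit constant simply mirrors the three-address line in the log.

```go
// Sketch: flag a resolv.conf that carries more nameservers than kubelet will
// apply to pods. The limit of three matches the three-address "applied
// nameserver line" in the kubelet errors above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxApplied = 3 // nameservers kubelet actually applies, per the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatalf("open resolv.conf: %v", err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("read resolv.conf: %v", err)
	}

	fmt.Printf("%d nameservers configured: %s\n", len(nameservers), strings.Join(nameservers, " "))
	if len(nameservers) > maxApplied {
		fmt.Printf("kubelet will only apply the first %d: %s\n",
			maxApplied, strings.Join(nameservers[:maxApplied], " "))
	}
}
```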
Mar 2 13:01:21.772026 systemd[1]: Started sshd@36-10.0.0.12:22-10.0.0.1:43284.service - OpenSSH per-connection server daemon (10.0.0.1:43284). Mar 2 13:01:21.927441 sshd[7171]: Accepted publickey for core from 10.0.0.1 port 43284 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:21.931793 sshd[7171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:21.965399 systemd-logind[1440]: New session 37 of user core. Mar 2 13:01:21.981441 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 2 13:01:22.416224 sshd[7171]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:22.525510 systemd[1]: sshd@36-10.0.0.12:22-10.0.0.1:43284.service: Deactivated successfully. Mar 2 13:01:22.533216 systemd[1]: session-37.scope: Deactivated successfully. Mar 2 13:01:22.536603 systemd-logind[1440]: Session 37 logged out. Waiting for processes to exit. Mar 2 13:01:22.538925 systemd-logind[1440]: Removed session 37. Mar 2 13:01:27.441149 systemd[1]: Started sshd@37-10.0.0.12:22-10.0.0.1:43294.service - OpenSSH per-connection server daemon (10.0.0.1:43294). Mar 2 13:01:27.550663 sshd[7215]: Accepted publickey for core from 10.0.0.1 port 43294 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:27.554746 sshd[7215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:27.564957 systemd-logind[1440]: New session 38 of user core. Mar 2 13:01:27.580374 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 2 13:01:27.911435 sshd[7215]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:27.920815 systemd[1]: sshd@37-10.0.0.12:22-10.0.0.1:43294.service: Deactivated successfully. Mar 2 13:01:27.929454 systemd[1]: session-38.scope: Deactivated successfully. Mar 2 13:01:27.931131 systemd-logind[1440]: Session 38 logged out. Waiting for processes to exit. Mar 2 13:01:27.933961 systemd-logind[1440]: Removed session 38. Mar 2 13:01:32.980837 systemd[1]: Started sshd@38-10.0.0.12:22-10.0.0.1:36602.service - OpenSSH per-connection server daemon (10.0.0.1:36602). Mar 2 13:01:33.136678 sshd[7231]: Accepted publickey for core from 10.0.0.1 port 36602 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:33.143861 sshd[7231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:33.168380 systemd-logind[1440]: New session 39 of user core. Mar 2 13:01:33.206059 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 2 13:01:34.117711 sshd[7231]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:34.156444 systemd[1]: sshd@38-10.0.0.12:22-10.0.0.1:36602.service: Deactivated successfully. Mar 2 13:01:34.163187 systemd[1]: session-39.scope: Deactivated successfully. Mar 2 13:01:34.166404 systemd-logind[1440]: Session 39 logged out. Waiting for processes to exit. Mar 2 13:01:34.176580 systemd-logind[1440]: Removed session 39. 
Mar 2 13:01:36.626746 kubelet[2603]: E0302 13:01:36.624809 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:38.642144 kubelet[2603]: E0302 13:01:38.628872 2603 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:39.211729 systemd[1]: Started sshd@39-10.0.0.12:22-10.0.0.1:48802.service - OpenSSH per-connection server daemon (10.0.0.1:48802). Mar 2 13:01:40.089346 sshd[7285]: Accepted publickey for core from 10.0.0.1 port 48802 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:01:40.106216 sshd[7285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:41.384150 systemd-logind[1440]: New session 40 of user core. Mar 2 13:01:41.554881 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 2 13:01:42.030480 kubelet[2603]: E0302 13:01:42.027607 2603 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.392s" Mar 2 13:01:43.844689 sshd[7285]: pam_unix(sshd:session): session closed for user core Mar 2 13:01:43.877528 systemd[1]: sshd@39-10.0.0.12:22-10.0.0.1:48802.service: Deactivated successfully. Mar 2 13:01:43.899077 systemd[1]: session-40.scope: Deactivated successfully. Mar 2 13:01:43.917900 systemd-logind[1440]: Session 40 logged out. Waiting for processes to exit. Mar 2 13:01:43.938782 systemd-logind[1440]: Removed session 40.
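The kubelet.go:2618 "Housekeeping took longer than expected" entry above reports one housekeeping pass taking 1.392s against a 1s expectation, a hint that the node was briefly saturated around the long session 40. The warning pattern itself, timing a periodic task and logging when a pass overruns its interval, is sketched below with purely illustrative names; it is not kubelet code.

```go
// Schematic sketch of the pattern behind "Housekeeping took longer than
// expected": run a periodic task, time each pass, and warn when a pass
// overruns the expected interval. Names and the simulated delay are illustrative.
package main

import (
	"log"
	"time"
)

func housekeeping() {
	time.Sleep(1392 * time.Millisecond) // stand-in for a slow pass (matches the logged 1.392s)
}

func main() {
	const expected = 1 * time.Second

	ticker := time.NewTicker(expected)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		start := time.Now()
		housekeeping()
		if actual := time.Since(start); actual > expected {
			log.Printf("housekeeping took longer than expected: expected=%v actual=%v", expected, actual)
		}
	}
}
```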