Mar 2 13:07:44.099724 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026 Mar 2 13:07:44.099746 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:07:44.099763 kernel: BIOS-provided physical RAM map: Mar 2 13:07:44.099773 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 2 13:07:44.099783 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 2 13:07:44.099793 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 2 13:07:44.099802 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 2 13:07:44.099810 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 2 13:07:44.099820 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 2 13:07:44.099831 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 2 13:07:44.099844 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 2 13:07:44.099854 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 2 13:07:44.099864 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 2 13:07:44.099873 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 2 13:07:44.099884 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 2 13:07:44.099894 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 2 13:07:44.099907 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 2 13:07:44.099917 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 2 13:07:44.099926 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 2 13:07:44.099935 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 2 13:07:44.099945 kernel: NX (Execute Disable) protection: active Mar 2 13:07:44.099956 kernel: APIC: Static calls initialized Mar 2 13:07:44.099962 kernel: efi: EFI v2.7 by EDK II Mar 2 13:07:44.099968 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 2 13:07:44.099974 kernel: SMBIOS 2.8 present. 
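The BIOS-e820 entries above list which physical address ranges the firmware reports as usable RAM versus reserved, ACPI, or firmware-owned regions. As an illustrative aside (not part of the log), a small Python sketch like the following could total the usable ranges from `dmesg`-style output; the regular expression and the sample lines are assumptions based only on the format shown above.

```python
import re

# Matches the kernel's e820 print format:
#   BIOS-e820: [mem 0xSTART-0xEND] <type>
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

# A few lines copied from the log above; in practice this text could come
# from `dmesg` output.
sample = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
"""

def usable_bytes(text: str) -> int:
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for line in text.splitlines():
        m = E820_RE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # the printed ranges are inclusive
    return total

print(f"usable RAM in sample: {usable_bytes(sample) / 2**20:.1f} MiB")
```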
Mar 2 13:07:44.099979 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 2 13:07:44.099985 kernel: Hypervisor detected: KVM Mar 2 13:07:44.099993 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 2 13:07:44.099999 kernel: kvm-clock: using sched offset of 11235711414 cycles Mar 2 13:07:44.100006 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 2 13:07:44.100012 kernel: tsc: Detected 2445.424 MHz processor Mar 2 13:07:44.100018 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 2 13:07:44.100025 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 2 13:07:44.100031 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 2 13:07:44.100037 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 2 13:07:44.100043 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 2 13:07:44.100052 kernel: Using GB pages for direct mapping Mar 2 13:07:44.100058 kernel: Secure boot disabled Mar 2 13:07:44.100064 kernel: ACPI: Early table checksum verification disabled Mar 2 13:07:44.100070 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 2 13:07:44.100079 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 2 13:07:44.100086 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:07:44.100092 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:07:44.100101 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 2 13:07:44.100107 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:07:44.100113 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:07:44.100120 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:07:44.100126 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:07:44.100132 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 2 13:07:44.100138 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 2 13:07:44.100147 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 2 13:07:44.100154 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 2 13:07:44.100208 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 2 13:07:44.100216 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 2 13:07:44.100223 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 2 13:07:44.100229 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 2 13:07:44.100235 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 2 13:07:44.100241 kernel: No NUMA configuration found Mar 2 13:07:44.100247 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 2 13:07:44.100257 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 2 13:07:44.100264 kernel: Zone ranges: Mar 2 13:07:44.100270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 2 13:07:44.100276 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 2 13:07:44.100282 kernel: Normal empty Mar 2 13:07:44.100289 kernel: Movable zone start for each node Mar 2 13:07:44.100295 kernel: Early memory node ranges Mar 2 13:07:44.100301 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Mar 2 13:07:44.100307 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 2 13:07:44.100313 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 2 13:07:44.100322 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 2 13:07:44.100328 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 2 13:07:44.100334 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 2 13:07:44.100341 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 2 13:07:44.100347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 2 13:07:44.100353 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 2 13:07:44.100359 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 2 13:07:44.100365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 2 13:07:44.100372 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 2 13:07:44.100380 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 2 13:07:44.100386 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 2 13:07:44.100392 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 2 13:07:44.100399 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 2 13:07:44.100405 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 2 13:07:44.100411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 2 13:07:44.100417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 2 13:07:44.100423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 2 13:07:44.100430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 2 13:07:44.100438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 2 13:07:44.100445 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 2 13:07:44.100451 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 2 13:07:44.100457 kernel: TSC deadline timer available Mar 2 13:07:44.100464 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 2 13:07:44.100470 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 2 13:07:44.100476 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 2 13:07:44.100482 kernel: kvm-guest: setup PV sched yield Mar 2 13:07:44.100489 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 2 13:07:44.100495 kernel: Booting paravirtualized kernel on KVM Mar 2 13:07:44.100504 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 2 13:07:44.100510 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 2 13:07:44.100517 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 2 13:07:44.100523 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 2 13:07:44.100529 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 2 13:07:44.100535 kernel: kvm-guest: PV spinlocks enabled Mar 2 13:07:44.100541 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 2 13:07:44.100549 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:07:44.100557 kernel: random: crng init done Mar 2 
13:07:44.100591 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 2 13:07:44.100598 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 2 13:07:44.100605 kernel: Fallback order for Node 0: 0 Mar 2 13:07:44.100611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 2 13:07:44.100617 kernel: Policy zone: DMA32 Mar 2 13:07:44.100623 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 2 13:07:44.100630 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 2 13:07:44.100636 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 2 13:07:44.100645 kernel: ftrace: allocating 37996 entries in 149 pages Mar 2 13:07:44.100651 kernel: ftrace: allocated 149 pages with 4 groups Mar 2 13:07:44.100658 kernel: Dynamic Preempt: voluntary Mar 2 13:07:44.100664 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 2 13:07:44.100679 kernel: rcu: RCU event tracing is enabled. Mar 2 13:07:44.100688 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 2 13:07:44.100695 kernel: Trampoline variant of Tasks RCU enabled. Mar 2 13:07:44.100701 kernel: Rude variant of Tasks RCU enabled. Mar 2 13:07:44.100708 kernel: Tracing variant of Tasks RCU enabled. Mar 2 13:07:44.100714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 2 13:07:44.100721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 2 13:07:44.100730 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 2 13:07:44.100737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 2 13:07:44.100743 kernel: Console: colour dummy device 80x25 Mar 2 13:07:44.100749 kernel: printk: console [ttyS0] enabled Mar 2 13:07:44.100761 kernel: ACPI: Core revision 20230628 Mar 2 13:07:44.100777 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 2 13:07:44.100788 kernel: APIC: Switch to symmetric I/O mode setup Mar 2 13:07:44.100798 kernel: x2apic enabled Mar 2 13:07:44.100809 kernel: APIC: Switched APIC routing to: physical x2apic Mar 2 13:07:44.100822 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 2 13:07:44.100832 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 2 13:07:44.100844 kernel: kvm-guest: setup PV IPIs Mar 2 13:07:44.100855 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 2 13:07:44.100866 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 2 13:07:44.100880 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Mar 2 13:07:44.100892 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 2 13:07:44.100904 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 2 13:07:44.100910 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 2 13:07:44.100917 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 2 13:07:44.100924 kernel: Spectre V2 : Mitigation: Retpolines Mar 2 13:07:44.100931 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 2 13:07:44.100937 kernel: Speculative Store Bypass: Vulnerable Mar 2 13:07:44.100944 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Mar 2 13:07:44.100953 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 2 13:07:44.100960 kernel: active return thunk: srso_alias_return_thunk Mar 2 13:07:44.100967 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 2 13:07:44.100974 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 2 13:07:44.100980 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 2 13:07:44.100987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 2 13:07:44.100994 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 2 13:07:44.101000 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 2 13:07:44.101007 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 2 13:07:44.101016 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 2 13:07:44.101022 kernel: Freeing SMP alternatives memory: 32K Mar 2 13:07:44.101029 kernel: pid_max: default: 32768 minimum: 301 Mar 2 13:07:44.101035 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 2 13:07:44.101042 kernel: landlock: Up and running. Mar 2 13:07:44.101048 kernel: SELinux: Initializing. Mar 2 13:07:44.101055 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:07:44.101062 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:07:44.101068 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 2 13:07:44.101077 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 13:07:44.101084 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 13:07:44.101091 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 13:07:44.101098 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 2 13:07:44.101104 kernel: signal: max sigframe size: 1776 Mar 2 13:07:44.101111 kernel: rcu: Hierarchical SRCU implementation. Mar 2 13:07:44.101117 kernel: rcu: Max phase no-delay instances is 400. Mar 2 13:07:44.101124 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 2 13:07:44.101133 kernel: smp: Bringing up secondary CPUs ... Mar 2 13:07:44.101139 kernel: smpboot: x86: Booting SMP configuration: Mar 2 13:07:44.101146 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 2 13:07:44.101152 kernel: smp: Brought up 1 node, 4 CPUs Mar 2 13:07:44.101277 kernel: smpboot: Max logical packages: 1 Mar 2 13:07:44.101287 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 2 13:07:44.101293 kernel: devtmpfs: initialized Mar 2 13:07:44.101300 kernel: x86/mm: Memory block size: 128MB Mar 2 13:07:44.101306 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 2 13:07:44.101313 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 2 13:07:44.101323 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 2 13:07:44.101330 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 2 13:07:44.101337 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 2 13:07:44.101343 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 2 13:07:44.101350 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 2 13:07:44.101357 kernel: pinctrl core: initialized pinctrl subsystem Mar 2 13:07:44.101363 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 2 13:07:44.101370 kernel: audit: initializing netlink subsys (disabled) Mar 2 13:07:44.101379 kernel: audit: type=2000 audit(1772456862.387:1): state=initialized audit_enabled=0 res=1 Mar 2 13:07:44.101386 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 2 13:07:44.101392 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 2 13:07:44.101399 kernel: cpuidle: using governor menu Mar 2 13:07:44.101405 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 2 13:07:44.101412 kernel: dca service started, version 1.12.1 Mar 2 13:07:44.101419 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 2 13:07:44.101426 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 2 13:07:44.101432 kernel: PCI: Using configuration type 1 for base access Mar 2 13:07:44.101441 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
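The per-CPU figure of 4890.84 BogoMIPS (lpj=2445424) earlier and the 19563.39 BogoMIPS total above follow from the skipped delay-loop calibration, which derives lpj from the 2445.424 MHz TSC. A quick sanity check, assuming the usual HZ=1000 tick rate (an assumption; CONFIG_HZ is not visible in this log):

```python
# Rough check of the BogoMIPS numbers printed above.
HZ = 1000                 # assumed timer tick rate
lpj = 2445424             # loops per jiffy, from "lpj=2445424"
cpus = 4                  # "smp: Brought up 1 node, 4 CPUs"

bogomips_per_cpu = lpj * HZ / 500_000
print(f"per-CPU: {bogomips_per_cpu:.2f}")         # 4890.85 (kernel prints
                                                  # 4890.84: it truncates)
print(f"total:   {bogomips_per_cpu * cpus:.2f}")  # 19563.39, as logged
```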
Mar 2 13:07:44.101447 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 2 13:07:44.101454 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 2 13:07:44.101461 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 2 13:07:44.101509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 2 13:07:44.101517 kernel: ACPI: Added _OSI(Module Device) Mar 2 13:07:44.101523 kernel: ACPI: Added _OSI(Processor Device) Mar 2 13:07:44.101530 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 2 13:07:44.101536 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 2 13:07:44.101546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 2 13:07:44.101552 kernel: ACPI: Interpreter enabled Mar 2 13:07:44.101558 kernel: ACPI: PM: (supports S0 S3 S5) Mar 2 13:07:44.101593 kernel: ACPI: Using IOAPIC for interrupt routing Mar 2 13:07:44.101601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 2 13:07:44.101607 kernel: PCI: Using E820 reservations for host bridge windows Mar 2 13:07:44.101614 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 2 13:07:44.101620 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 2 13:07:44.102008 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 2 13:07:44.102157 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 2 13:07:44.102406 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 2 13:07:44.102424 kernel: PCI host bridge to bus 0000:00 Mar 2 13:07:44.102718 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 2 13:07:44.102892 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 2 13:07:44.103024 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 2 13:07:44.103145 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 2 13:07:44.103357 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 2 13:07:44.103603 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 2 13:07:44.103740 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 2 13:07:44.104006 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 2 13:07:44.104253 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 2 13:07:44.104385 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 2 13:07:44.104513 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 2 13:07:44.104687 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 2 13:07:44.104844 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 2 13:07:44.104984 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 2 13:07:44.105116 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 2 13:07:44.105299 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 2 13:07:44.105430 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 2 13:07:44.105549 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 2 13:07:44.105776 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 2 13:07:44.105948 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 2 13:07:44.106074 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 2 13:07:44.106264 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 2 13:07:44.106508 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 2 13:07:44.106679 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 2 13:07:44.106833 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 2 13:07:44.106989 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 2 13:07:44.107113 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 2 13:07:44.107348 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 2 13:07:44.107477 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 2 13:07:44.107701 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 2 13:07:44.107833 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 2 13:07:44.107952 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 2 13:07:44.108129 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 2 13:07:44.108301 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 2 13:07:44.108312 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 2 13:07:44.108319 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 2 13:07:44.108326 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 2 13:07:44.108337 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 2 13:07:44.108344 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 2 13:07:44.108350 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 2 13:07:44.108357 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 2 13:07:44.108363 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 2 13:07:44.108370 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 2 13:07:44.108377 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 2 13:07:44.108383 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 2 13:07:44.108389 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 2 13:07:44.108399 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 2 13:07:44.108405 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 2 13:07:44.108412 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 2 13:07:44.108419 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 2 13:07:44.108425 kernel: iommu: Default domain type: Translated Mar 2 13:07:44.108432 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 2 13:07:44.108439 kernel: efivars: Registered efivars operations Mar 2 13:07:44.108445 kernel: PCI: Using ACPI for IRQ routing Mar 2 13:07:44.108452 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 2 13:07:44.108459 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 2 13:07:44.108468 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 2 13:07:44.108474 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 2 13:07:44.108481 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 2 13:07:44.108639 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 2 13:07:44.108762 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 2 13:07:44.108883 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 2 13:07:44.108892 kernel: vgaarb: loaded Mar 2 13:07:44.108899 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 2 13:07:44.108909 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Mar 2 13:07:44.108916 kernel: clocksource: Switched to clocksource kvm-clock Mar 2 13:07:44.108923 kernel: VFS: Disk quotas dquot_6.6.0 Mar 2 13:07:44.108929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 2 13:07:44.108936 kernel: pnp: PnP ACPI init Mar 2 13:07:44.109131 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 2 13:07:44.109142 kernel: pnp: PnP ACPI: found 6 devices Mar 2 13:07:44.109150 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 2 13:07:44.109200 kernel: NET: Registered PF_INET protocol family Mar 2 13:07:44.109208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 2 13:07:44.109215 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 2 13:07:44.109221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 2 13:07:44.109228 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 2 13:07:44.109234 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 2 13:07:44.109241 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 2 13:07:44.109248 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:07:44.109254 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:07:44.109264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 2 13:07:44.109271 kernel: NET: Registered PF_XDP protocol family Mar 2 13:07:44.109401 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 2 13:07:44.109522 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 2 13:07:44.109674 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 2 13:07:44.109786 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 2 13:07:44.109895 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 2 13:07:44.110004 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 2 13:07:44.110118 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 2 13:07:44.110305 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 2 13:07:44.110317 kernel: PCI: CLS 0 bytes, default 64 Mar 2 13:07:44.110324 kernel: Initialise system trusted keyrings Mar 2 13:07:44.110331 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 2 13:07:44.110338 kernel: Key type asymmetric registered Mar 2 13:07:44.110344 kernel: Asymmetric key parser 'x509' registered Mar 2 13:07:44.110351 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 2 13:07:44.110357 kernel: io scheduler mq-deadline registered Mar 2 13:07:44.110368 kernel: io scheduler kyber registered Mar 2 13:07:44.110375 kernel: io scheduler bfq registered Mar 2 13:07:44.110381 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 2 13:07:44.110388 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 2 13:07:44.110395 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 2 13:07:44.110402 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 2 13:07:44.110408 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 13:07:44.110415 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 2 13:07:44.110421 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Mar 2 13:07:44.110431 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 2 13:07:44.110437 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 2 13:07:44.110657 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 2 13:07:44.110670 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 2 13:07:44.110784 kernel: rtc_cmos 00:04: registered as rtc0 Mar 2 13:07:44.110898 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:07:43 UTC (1772456863) Mar 2 13:07:44.111012 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 2 13:07:44.111020 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 2 13:07:44.111031 kernel: efifb: probing for efifb Mar 2 13:07:44.111038 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 2 13:07:44.111045 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 2 13:07:44.111052 kernel: efifb: scrolling: redraw Mar 2 13:07:44.111058 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 2 13:07:44.111065 kernel: Console: switching to colour frame buffer device 100x37 Mar 2 13:07:44.111071 kernel: fb0: EFI VGA frame buffer device Mar 2 13:07:44.111078 kernel: pstore: Using crash dump compression: deflate Mar 2 13:07:44.111084 kernel: pstore: Registered efi_pstore as persistent store backend Mar 2 13:07:44.111093 kernel: NET: Registered PF_INET6 protocol family Mar 2 13:07:44.111100 kernel: Segment Routing with IPv6 Mar 2 13:07:44.111107 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 13:07:44.111113 kernel: NET: Registered PF_PACKET protocol family Mar 2 13:07:44.111120 kernel: Key type dns_resolver registered Mar 2 13:07:44.111127 kernel: IPI shorthand broadcast: enabled Mar 2 13:07:44.111152 kernel: sched_clock: Marking stable (1772024940, 468581701)->(2532619732, -292013091) Mar 2 13:07:44.111258 kernel: registered taskstats version 1 Mar 2 13:07:44.111266 kernel: Loading compiled-in X.509 certificates Mar 2 13:07:44.111277 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45' Mar 2 13:07:44.111284 kernel: Key type .fscrypt registered Mar 2 13:07:44.111291 kernel: Key type fscrypt-provisioning registered Mar 2 13:07:44.111298 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 2 13:07:44.111305 kernel: ima: Allocated hash algorithm: sha1 Mar 2 13:07:44.111314 kernel: ima: No architecture policies found Mar 2 13:07:44.111320 kernel: clk: Disabling unused clocks Mar 2 13:07:44.111327 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 2 13:07:44.111334 kernel: Write protecting the kernel read-only data: 36864k Mar 2 13:07:44.111343 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 2 13:07:44.111350 kernel: Run /init as init process Mar 2 13:07:44.111357 kernel: with arguments: Mar 2 13:07:44.111364 kernel: /init Mar 2 13:07:44.111371 kernel: with environment: Mar 2 13:07:44.111377 kernel: HOME=/ Mar 2 13:07:44.111384 kernel: TERM=linux Mar 2 13:07:44.111393 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:07:44.111404 systemd[1]: Detected virtualization kvm. Mar 2 13:07:44.111412 systemd[1]: Detected architecture x86-64. 
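Two of the numbers above can be cross-checked directly: the Unix timestamp in the rtc_cmos line and the 1408k framebuffer size reported by efifb. A short illustrative check; treating the efifb figure as the 800x600x24 framebuffer rounded up to whole 4 KiB pages is an assumption about where 1408k comes from.

```python
from datetime import datetime, timezone

# rtc_cmos: "setting system clock to 2026-03-02T13:07:43 UTC (1772456863)"
print(datetime.fromtimestamp(1772456863, tz=timezone.utc).isoformat())
# -> 2026-03-02T13:07:43+00:00, matching the human-readable part

# efifb: "mode is 800x600x24, linelength=2400" and "using 1408k, total 1408k"
linelength, yres = 2400, 600        # bytes per scanline, visible lines
fb_bytes = linelength * yres        # 1_440_000 bytes
pages = -(-fb_bytes // 4096)        # round up to whole 4 KiB pages (assumed)
print(pages * 4096 // 1024, "KiB")  # -> 1408 KiB, as reported
```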
Mar 2 13:07:44.111419 systemd[1]: Running in initrd. Mar 2 13:07:44.111426 systemd[1]: No hostname configured, using default hostname. Mar 2 13:07:44.111433 systemd[1]: Hostname set to . Mar 2 13:07:44.111440 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:07:44.111447 systemd[1]: Queued start job for default target initrd.target. Mar 2 13:07:44.111457 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:07:44.111464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:07:44.111472 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 13:07:44.111479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:07:44.111486 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 13:07:44.111498 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 13:07:44.111507 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 13:07:44.111514 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 13:07:44.111521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:07:44.111529 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:07:44.111536 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:07:44.111543 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:07:44.111553 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:07:44.111560 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:07:44.111598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:07:44.111605 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:07:44.111613 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 13:07:44.111620 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 2 13:07:44.111628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:07:44.111635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:07:44.111665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:07:44.111676 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:07:44.111683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 13:07:44.111691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:07:44.111698 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 13:07:44.111705 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 13:07:44.111713 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:07:44.111720 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:07:44.111727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:07:44.111737 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 13:07:44.111744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
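The oddly spelled unit names above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, are systemd's path escaping of /dev/disk/by-label/EFI-SYSTEM: the leading slash is dropped, remaining slashes become "-", and characters like "-" itself are encoded as \xNN. The real rules live in systemd (see `systemd-escape --path`); the sketch below is a deliberately simplified approximation that only covers what these log lines need.

```python
def systemd_path_to_device_unit(path: str) -> str:
    """Simplified imitation of systemd's path escaping, enough to
    reproduce the device unit names seen in this log."""
    parts = path.strip("/").split("/")
    escaped = []
    for part in parts:
        out = []
        for i, ch in enumerate(part):
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # e.g. "-" -> \x2d
        escaped.append("".join(out))
    # Path separators become "-", and device units end in ".device".
    return "-".join(escaped) + ".device"

print(systemd_path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```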
Mar 2 13:07:44.111774 systemd-journald[192]: Collecting audit messages is disabled. Mar 2 13:07:44.111792 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 13:07:44.111804 systemd-journald[192]: Journal started Mar 2 13:07:44.111819 systemd-journald[192]: Runtime Journal (/run/log/journal/f8aff2cabdd44167a4bf207765e03ec2) is 6.0M, max 48.3M, 42.2M free. Mar 2 13:07:44.117114 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:07:44.123043 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:07:44.124925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:07:44.125877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:07:44.131352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:07:44.146306 systemd-modules-load[193]: Inserted module 'overlay' Mar 2 13:07:44.146765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:07:44.157776 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:07:44.164930 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:07:44.168660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:07:44.187226 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 13:07:44.190536 systemd-modules-load[193]: Inserted module 'br_netfilter' Mar 2 13:07:44.193117 kernel: Bridge firewalling registered Mar 2 13:07:44.202550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:07:44.203970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:07:44.222739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:07:44.224893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:07:44.240867 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:07:44.254373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:07:44.262463 systemd-resolved[226]: Positive Trust Anchors: Mar 2 13:07:44.262502 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:07:44.262531 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:07:44.265038 systemd-resolved[226]: Defaulting to hostname 'linux'. Mar 2 13:07:44.266293 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:07:44.269036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 2 13:07:44.312450 dracut-cmdline[232]: dracut-dracut-053 Mar 2 13:07:44.316531 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:07:44.424276 kernel: SCSI subsystem initialized Mar 2 13:07:44.438266 kernel: Loading iSCSI transport class v2.0-870. Mar 2 13:07:44.455292 kernel: iscsi: registered transport (tcp) Mar 2 13:07:44.486770 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:07:44.486839 kernel: QLogic iSCSI HBA Driver Mar 2 13:07:44.546466 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:07:44.563493 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 13:07:44.592957 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 13:07:44.593008 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:07:44.595983 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 2 13:07:44.640274 kernel: raid6: avx2x4 gen() 30947 MB/s Mar 2 13:07:44.658271 kernel: raid6: avx2x2 gen() 28270 MB/s Mar 2 13:07:44.677601 kernel: raid6: avx2x1 gen() 23619 MB/s Mar 2 13:07:44.677635 kernel: raid6: using algorithm avx2x4 gen() 30947 MB/s Mar 2 13:07:44.697667 kernel: raid6: .... xor() 4692 MB/s, rmw enabled Mar 2 13:07:44.697706 kernel: raid6: using avx2x2 recovery algorithm Mar 2 13:07:44.722278 kernel: xor: automatically using best checksumming function avx Mar 2 13:07:44.885245 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:07:44.902288 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:07:44.916544 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:07:44.931830 systemd-udevd[414]: Using default interface naming scheme 'v255'. Mar 2 13:07:44.936960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:07:44.949376 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:07:44.964152 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Mar 2 13:07:45.006468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:07:45.022505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:07:45.114033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:07:45.134506 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:07:45.146972 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 13:07:45.147924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:07:45.164109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:07:45.168996 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:07:45.190311 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 13:07:45.197396 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 13:07:45.191375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Mar 2 13:07:45.222233 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 13:07:45.224229 kernel: libata version 3.00 loaded. Mar 2 13:07:45.225341 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:07:45.246079 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 13:07:45.246262 kernel: GPT:9289727 != 19775487 Mar 2 13:07:45.246308 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 13:07:45.246383 kernel: GPT:9289727 != 19775487 Mar 2 13:07:45.246420 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 13:07:45.246438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:07:45.249122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:07:45.249344 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:07:45.269345 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:07:45.294897 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (469) Mar 2 13:07:45.294920 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Mar 2 13:07:45.294931 kernel: AVX2 version of gcm_enc/dec engaged. Mar 2 13:07:45.287515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:07:45.311973 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 13:07:45.312301 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 13:07:45.312317 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 2 13:07:45.312476 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 13:07:45.312667 kernel: AES CTR mode by8 optimization enabled Mar 2 13:07:45.287860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:07:45.294781 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:07:45.320244 kernel: scsi host0: ahci Mar 2 13:07:45.322714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:07:45.353354 kernel: scsi host1: ahci Mar 2 13:07:45.353677 kernel: scsi host2: ahci Mar 2 13:07:45.353945 kernel: scsi host3: ahci Mar 2 13:07:45.354130 kernel: scsi host4: ahci Mar 2 13:07:45.355323 kernel: scsi host5: ahci Mar 2 13:07:45.355617 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 2 13:07:45.355631 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 2 13:07:45.355641 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 2 13:07:45.355656 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 2 13:07:45.355665 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 2 13:07:45.355674 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 2 13:07:45.372780 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 13:07:45.388558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 13:07:45.410607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:07:45.422481 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 2 13:07:45.432264 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 13:07:45.459409 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:07:45.468500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:07:45.472809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:07:45.484504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:07:45.484543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:07:45.484559 disk-uuid[551]: Primary Header is updated. Mar 2 13:07:45.484559 disk-uuid[551]: Secondary Entries is updated. Mar 2 13:07:45.484559 disk-uuid[551]: Secondary Header is updated. Mar 2 13:07:45.484703 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:07:45.522118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:07:45.557685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:07:45.631367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:07:45.648987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:07:45.667262 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 13:07:45.667311 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 13:07:45.671197 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 13:07:45.675279 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 13:07:45.679292 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 13:07:45.683810 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 13:07:45.683858 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 13:07:45.683879 kernel: ata3.00: applying bridge limits Mar 2 13:07:45.687158 kernel: ata3.00: configured for UDMA/100 Mar 2 13:07:45.692286 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 13:07:45.752254 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 13:07:45.752646 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:07:45.765289 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 13:07:46.490220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:07:46.490922 disk-uuid[552]: The operation has completed successfully. Mar 2 13:07:46.528133 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:07:46.528366 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:07:46.556455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:07:46.565456 sh[593]: Success Mar 2 13:07:46.583299 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 2 13:07:46.635940 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:07:46.659760 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:07:46.666483 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
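The earlier GPT warnings (GPT:9289727 != 19775487, "Alt. header is not at the end of the disk") and the disk-uuid run above fit together: the image's backup GPT header sits at LBA 9289727, i.e. the GPT was written for a roughly 4.4 GiB image, while the attached virtio disk has 19775488 sectors (about 9.4 GiB), so the first-boot tooling rewrites the headers, as the "Primary/Secondary Header is updated" lines show. A purely illustrative size calculation:

```python
SECTOR = 512  # bytes, per "19775488 512-byte logical blocks"

backup_hdr_lba = 9289727    # where the image's GPT expects the disk to end
actual_last_lba = 19775487  # 19775488 sectors, so the last LBA is 19775487

image_size = (backup_hdr_lba + 1) * SECTOR
disk_size = (actual_last_lba + 1) * SECTOR

print(f"size the GPT was written for: {image_size / 2**30:.2f} GiB")  # ~4.43
print(f"size of the attached disk:    {disk_size / 2**30:.2f} GiB")   # ~9.43
```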
Mar 2 13:07:46.685707 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 Mar 2 13:07:46.685758 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:07:46.685776 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 13:07:46.689103 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 13:07:46.691665 kernel: BTRFS info (device dm-0): using free space tree Mar 2 13:07:46.704353 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:07:46.710846 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:07:46.725368 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:07:46.732405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:07:46.748896 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:07:46.748936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:07:46.748955 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:07:46.753455 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:07:46.765893 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 13:07:46.772077 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:07:46.779514 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:07:46.795429 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:07:46.871712 ignition[676]: Ignition 2.19.0 Mar 2 13:07:46.871755 ignition[676]: Stage: fetch-offline Mar 2 13:07:46.871814 ignition[676]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:07:46.871829 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:07:46.871950 ignition[676]: parsed url from cmdline: "" Mar 2 13:07:46.871956 ignition[676]: no config URL provided Mar 2 13:07:46.871966 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:07:46.871979 ignition[676]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:07:46.872017 ignition[676]: op(1): [started] loading QEMU firmware config module Mar 2 13:07:46.872025 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 13:07:46.910748 ignition[676]: op(1): [finished] loading QEMU firmware config module Mar 2 13:07:46.937884 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:07:46.964543 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:07:47.000963 systemd-networkd[781]: lo: Link UP Mar 2 13:07:47.000999 systemd-networkd[781]: lo: Gained carrier Mar 2 13:07:47.007023 systemd-networkd[781]: Enumeration completed Mar 2 13:07:47.009540 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:07:47.014122 systemd[1]: Reached target network.target - Network. Mar 2 13:07:47.023144 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:07:47.023226 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 2 13:07:47.036110 systemd-networkd[781]: eth0: Link UP Mar 2 13:07:47.036121 systemd-networkd[781]: eth0: Gained carrier Mar 2 13:07:47.036131 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:07:47.064267 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:07:47.172135 ignition[676]: parsing config with SHA512: 975121bfad78966e47b0c85298b4b4c88334ad5ebf1449b9f42c265f7c75da06c63fa490e09eec6e4f9134cf4e076f9d317853ab6c6580b04be6c5d42e109a64 Mar 2 13:07:47.180460 unknown[676]: fetched base config from "system" Mar 2 13:07:47.181253 unknown[676]: fetched user config from "qemu" Mar 2 13:07:47.181676 ignition[676]: fetch-offline: fetch-offline passed Mar 2 13:07:47.183776 systemd-resolved[226]: Detected conflict on linux IN A 10.0.0.116 Mar 2 13:07:47.181745 ignition[676]: Ignition finished successfully Mar 2 13:07:47.183788 systemd-resolved[226]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Mar 2 13:07:47.183991 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:07:47.188523 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 13:07:47.209416 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 13:07:47.233665 ignition[786]: Ignition 2.19.0 Mar 2 13:07:47.238039 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 13:07:47.233675 ignition[786]: Stage: kargs Mar 2 13:07:47.233902 ignition[786]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:07:47.251459 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 2 13:07:47.233922 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:07:47.235157 ignition[786]: kargs: kargs passed Mar 2 13:07:47.235284 ignition[786]: Ignition finished successfully Mar 2 13:07:47.281124 ignition[794]: Ignition 2.19.0 Mar 2 13:07:47.281158 ignition[794]: Stage: disks Mar 2 13:07:47.281439 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:07:47.284255 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:07:47.281456 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:07:47.289050 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:07:47.282301 ignition[794]: disks: disks passed Mar 2 13:07:47.294473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:07:47.282347 ignition[794]: Ignition finished successfully Mar 2 13:07:47.294641 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:07:47.295122 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:07:47.296086 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:07:47.321448 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 13:07:47.341715 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 2 13:07:47.343050 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:07:47.353327 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 13:07:47.491284 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none. 
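The Ignition stages above (fetch-offline, kargs, disks, with the files stage following below) consumed a user config delivered through QEMU's firmware config device, which is why qemu_fw_cfg is modprobed before the config is parsed. As a rough illustration of the kind of config that leads to the later "adding ssh keys to user core" and file-writing steps, here is a minimal sketch built in Python; the spec version, key material, and file contents are placeholders, not values recovered from the hashed config in this log.

```python
import json

# Hypothetical minimal Ignition config in the spec-3.x shape.
config = {
    "ignition": {"version": "3.3.0"},          # assumed spec version
    "passwd": {
        "users": [
            {
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],
            }
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/home/core/install.sh",
                "mode": 0o755,
                "contents": {"source": "data:,echo%20hello"},
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```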
Mar 2 13:07:47.491425 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 13:07:47.496108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:07:47.517341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:07:47.536715 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Mar 2 13:07:47.536754 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:07:47.536775 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:07:47.522357 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 13:07:47.556681 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:07:47.556702 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:07:47.544820 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 2 13:07:47.544874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:07:47.544907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:07:47.551878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:07:47.557757 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:07:47.570125 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 13:07:47.625994 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:07:47.631012 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:07:47.636251 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:07:47.642030 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:07:47.767296 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 13:07:47.778559 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:07:47.783920 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:07:47.800432 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:07:47.792048 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 2 13:07:47.817410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 13:07:47.833942 ignition[925]: INFO : Ignition 2.19.0 Mar 2 13:07:47.833942 ignition[925]: INFO : Stage: mount Mar 2 13:07:47.838224 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:07:47.838224 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:07:47.838224 ignition[925]: INFO : mount: mount passed Mar 2 13:07:47.838224 ignition[925]: INFO : Ignition finished successfully Mar 2 13:07:47.852535 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:07:47.863613 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:07:47.876237 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 2 13:07:47.895298 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Mar 2 13:07:47.901551 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:07:47.901626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:07:47.901639 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:07:47.911241 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:07:47.913787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:07:47.955751 ignition[955]: INFO : Ignition 2.19.0 Mar 2 13:07:47.955751 ignition[955]: INFO : Stage: files Mar 2 13:07:47.961616 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:07:47.961616 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:07:47.961616 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:07:47.971770 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:07:47.971770 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:07:47.982808 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:07:47.982808 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:07:47.982808 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:07:47.982808 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 13:07:47.982808 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 13:07:47.977109 unknown[955]: wrote ssh authorized keys file for user: core Mar 2 13:07:48.058222 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 13:07:48.152811 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 13:07:48.152811 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/etc/flatcar/update.conf" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:07:48.163920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 2 13:07:48.452036 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 2 13:07:48.853143 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:07:48.853143 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 2 13:07:48.865982 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 13:07:48.920343 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:07:48.920343 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:07:48.920343 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 13:07:48.920343 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 2 13:07:48.920343 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 13:07:48.920343 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:07:48.920343 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:07:48.920343 ignition[955]: INFO : files: files passed Mar 2 
13:07:48.920343 ignition[955]: INFO : Ignition finished successfully Mar 2 13:07:48.907656 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:07:48.936633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:07:48.946808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 13:07:48.954641 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:07:48.998632 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 13:07:48.954805 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 13:07:49.006440 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:07:49.006440 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:07:48.975733 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:07:49.029029 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:07:48.980355 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 13:07:49.008458 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:07:49.011872 systemd-networkd[781]: eth0: Gained IPv6LL Mar 2 13:07:49.045341 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:07:49.045517 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 13:07:49.050309 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 13:07:49.053926 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 13:07:49.060551 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 13:07:49.073473 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 13:07:49.094536 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:07:49.101113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 13:07:49.124843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:07:49.128907 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:07:49.136616 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 13:07:49.136860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 13:07:49.137082 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:07:49.138760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:07:49.139450 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:07:49.140058 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:07:49.141149 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:07:49.141727 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
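The Ignition files stage logged above fetches the helm tarball, writes the small manifests under /home/core and /etc/flatcar/update.conf, downloads the kubernetes sysext image and links it under /etc/extensions, then writes prepare-helm.service, disables coreos-metadata.service and records the result file. A sketch of the kind of Ignition config fragment that would produce output like this is shown below, expressed as a Python dict and serialized to JSON; the paths and URLs are taken from the log, but the spec version, field names and unit bodies are assumptions, not a reconstruction of the config this host was actually served.

    #!/usr/bin/env python3
    # Sketch: an Ignition-style config fragment matching the files stage above.
    # The storage/systemd field layout follows the Ignition v3.x spec as an
    # assumption; the real config used on this host is not shown in the log.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                    },
                },
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw",
                    "contents": {
                        "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw"
                    },
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "# unit body omitted; written to /etc/systemd/system"},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    if __name__ == "__main__":
        print(json.dumps(config, indent=2))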
Mar 2 13:07:49.271994 ignition[1009]: INFO : Ignition 2.19.0 Mar 2 13:07:49.271994 ignition[1009]: INFO : Stage: umount Mar 2 13:07:49.271994 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:07:49.271994 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:07:49.271994 ignition[1009]: INFO : umount: umount passed Mar 2 13:07:49.271994 ignition[1009]: INFO : Ignition finished successfully Mar 2 13:07:49.142356 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 13:07:49.142883 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:07:49.144127 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:07:49.146226 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 13:07:49.146847 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:07:49.147872 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:07:49.148012 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:07:49.150280 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:07:49.150767 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:07:49.151336 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:07:49.151704 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:07:49.151808 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:07:49.151931 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 13:07:49.154053 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:07:49.154226 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:07:49.154670 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:07:49.155029 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:07:49.155343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:07:49.155541 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:07:49.156041 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:07:49.157097 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:07:49.157252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:07:49.157679 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:07:49.157810 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:07:49.158149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 13:07:49.158354 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:07:49.158633 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 13:07:49.158742 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 13:07:49.241674 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 13:07:49.248495 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 13:07:49.255218 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 13:07:49.255444 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:07:49.261946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 2 13:07:49.262108 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:07:49.277026 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 13:07:49.277274 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 13:07:49.283892 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 13:07:49.283998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 13:07:49.292490 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 13:07:49.302552 systemd[1]: Stopped target network.target - Network. Mar 2 13:07:49.312488 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 13:07:49.312634 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 13:07:49.320303 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 13:07:49.320366 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 13:07:49.326705 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 13:07:49.326766 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 13:07:49.332421 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:07:49.332479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:07:49.339132 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:07:49.346677 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:07:49.360572 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:07:49.360751 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:07:49.365777 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 13:07:49.365844 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:07:49.366434 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:07:49.366570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:07:49.367640 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 13:07:49.367722 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:07:49.378297 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 2 13:07:49.398564 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:07:49.398802 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:07:49.404386 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:07:49.404493 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:07:49.441639 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:07:49.446758 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:07:49.446827 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:07:49.455019 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:07:49.455101 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:07:49.463277 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:07:49.463350 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:07:49.480895 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:07:49.607085 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 2 13:07:49.607446 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 13:07:49.621413 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:07:49.621706 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:07:49.640155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:07:49.640324 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 13:07:49.646460 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 13:07:49.646522 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:07:49.665755 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:07:49.665826 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:07:49.670850 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:07:49.670921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:07:49.680831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:07:49.680950 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:07:49.744348 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:07:49.751499 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 13:07:49.751692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:07:49.755969 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 2 13:07:49.982981 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Mar 2 13:07:49.756049 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:07:49.761632 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:07:49.761721 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:07:49.762225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:07:49.762320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:07:49.805657 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:07:49.805849 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:07:49.835288 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 13:07:49.864470 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:07:49.878077 systemd[1]: Switching root. Mar 2 13:07:50.057632 systemd-journald[192]: Journal stopped Mar 2 13:07:58.177781 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 13:07:58.199879 kernel: SELinux: policy capability open_perms=1 Mar 2 13:07:58.200014 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 13:07:58.200039 kernel: SELinux: policy capability always_check_network=0 Mar 2 13:07:58.200114 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 13:07:58.200288 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 13:07:58.200311 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 13:07:58.200329 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 13:07:58.200428 kernel: audit: type=1403 audit(1772456870.199:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 13:07:58.200533 systemd[1]: Successfully loaded SELinux policy in 86.134ms. 
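After the switch into the real root, the kernel reports the SELinux policy capabilities and systemd loads the policy in roughly 86 ms. A small sketch for checking from userspace whether SELinux is present and enforcing follows, using the selinuxfs files the kernel exposes; the /sys/fs/selinux mount point is the conventional one and is an assumption here rather than something stated in the log.

    #!/usr/bin/env python3
    # Sketch: query SELinux state via selinuxfs (conventionally mounted at
    # /sys/fs/selinux). The "enforce" file holds 1 for enforcing, 0 for permissive.
    import pathlib

    SELINUXFS = pathlib.Path("/sys/fs/selinux")

    def selinux_state() -> str:
        if not SELINUXFS.is_dir():
            return "disabled (selinuxfs not mounted)"
        enforce = (SELINUXFS / "enforce").read_text().strip()
        return "enforcing" if enforce == "1" else "permissive"

    if __name__ == "__main__":
        print(selinux_state())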
Mar 2 13:07:58.200570 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.905ms. Mar 2 13:07:58.200638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:07:58.200667 systemd[1]: Detected virtualization kvm. Mar 2 13:07:58.200720 systemd[1]: Detected architecture x86-64. Mar 2 13:07:58.200739 systemd[1]: Detected first boot. Mar 2 13:07:58.200759 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:07:58.200780 zram_generator::config[1054]: No configuration found. Mar 2 13:07:58.200841 systemd[1]: Populated /etc with preset unit settings. Mar 2 13:07:58.200868 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 13:07:58.200891 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 2 13:07:58.200947 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 13:07:58.200969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 13:07:58.200990 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 13:07:58.201018 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 13:07:58.201036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 13:07:58.201054 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 13:07:58.201123 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 13:07:58.201146 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 13:07:58.201233 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 13:07:58.201292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:07:58.201306 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:07:58.201317 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 2 13:07:58.201327 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 13:07:58.201347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 13:07:58.201398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:07:58.201411 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 13:07:58.201424 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:07:58.201442 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 2 13:07:58.201460 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 13:07:58.201478 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 13:07:58.201498 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 13:07:58.201515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:07:58.201572 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:07:58.201637 systemd[1]: Reached target slices.target - Slice Units. 
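"Initializing machine ID from VM UUID" above means systemd derived /etc/machine-id from the hypervisor-provided DMI product UUID on this first boot. The sketch below compares the two, assuming the usual derivation (UUID lower-cased with dashes removed); reading /sys/class/dmi/id/product_uuid typically requires root, and the derivation rule is taken from systemd's documented behaviour rather than from this log.

    #!/usr/bin/env python3
    # Sketch: compare /etc/machine-id with the DMI product UUID it was
    # reportedly initialized from (dashes stripped, lower-cased).
    import pathlib

    def read(path: str) -> str:
        return pathlib.Path(path).read_text().strip()

    if __name__ == "__main__":
        machine_id = read("/etc/machine-id")
        product_uuid = read("/sys/class/dmi/id/product_uuid")  # usually root-only
        derived = product_uuid.replace("-", "").lower()
        print(f"machine-id : {machine_id}")
        print(f"DMI UUID   : {product_uuid}")
        print("match" if machine_id == derived else "no match")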
Mar 2 13:07:58.201659 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:07:58.201680 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 13:07:58.201802 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 13:07:58.201831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:07:58.201852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:07:58.201872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:07:58.201892 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 13:07:58.201912 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 13:07:58.202000 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 13:07:58.202022 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 13:07:58.202040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:07:58.202057 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 13:07:58.202075 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 13:07:58.202103 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 13:07:58.202124 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 13:07:58.202141 systemd[1]: Reached target machines.target - Containers. Mar 2 13:07:58.202249 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 13:07:58.202271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:07:58.202290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:07:58.202498 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 13:07:58.202513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:07:58.202524 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:07:58.202535 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:07:58.202546 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 2 13:07:58.202561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:07:58.202719 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 13:07:58.202735 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 13:07:58.202746 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 13:07:58.202757 kernel: fuse: init (API version 7.39) Mar 2 13:07:58.202768 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 13:07:58.202779 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 13:07:58.202790 kernel: loop: module loaded Mar 2 13:07:58.202801 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:07:58.202846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:07:58.202868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 2 13:07:58.202887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 13:07:58.202950 systemd-journald[1138]: Collecting audit messages is disabled. Mar 2 13:07:58.202988 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:07:58.203002 systemd-journald[1138]: Journal started Mar 2 13:07:58.203056 systemd-journald[1138]: Runtime Journal (/run/log/journal/f8aff2cabdd44167a4bf207765e03ec2) is 6.0M, max 48.3M, 42.2M free. Mar 2 13:07:51.356572 systemd[1]: Queued start job for default target multi-user.target. Mar 2 13:07:51.375649 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 2 13:07:51.376494 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 2 13:07:51.377003 systemd[1]: systemd-journald.service: Consumed 1.938s CPU time. Mar 2 13:07:58.237268 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 13:07:58.237381 kernel: ACPI: bus type drm_connector registered Mar 2 13:07:58.237410 systemd[1]: Stopped verity-setup.service. Mar 2 13:07:58.257468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:07:58.262247 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:07:58.287043 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 13:07:58.290969 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 13:07:58.296685 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 13:07:58.300913 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 2 13:07:58.304871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 13:07:58.309269 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 13:07:58.358051 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 13:07:58.382884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:07:58.388514 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 13:07:58.388821 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 13:07:58.393508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:07:58.393792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 13:07:58.398710 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 13:07:58.399035 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 13:07:58.403143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:07:58.404046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:07:58.440455 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 13:07:58.441112 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 13:07:58.459465 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 13:07:58.459972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:07:58.486005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:07:58.489897 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:07:58.495410 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 2 13:07:58.521064 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 13:07:58.538384 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 13:07:58.545271 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 13:07:58.550531 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 13:07:58.550651 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:07:58.556142 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 2 13:07:58.562738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 13:07:58.569484 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 13:07:58.574465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:07:58.576468 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 13:07:58.587026 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 13:07:58.592674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 13:07:58.595618 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 13:07:58.601572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 13:07:58.606894 systemd-journald[1138]: Time spent on flushing to /var/log/journal/f8aff2cabdd44167a4bf207765e03ec2 is 19.957ms for 986 entries. Mar 2 13:07:58.606894 systemd-journald[1138]: System Journal (/var/log/journal/f8aff2cabdd44167a4bf207765e03ec2) is 8.0M, max 195.6M, 187.6M free. Mar 2 13:08:00.140463 systemd-journald[1138]: Received client request to flush runtime journal. Mar 2 13:07:58.616911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:07:58.675884 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 13:08:00.081726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:08:00.089737 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 13:08:00.095688 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 13:08:00.107127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 13:08:00.121569 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 13:08:00.131018 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 13:08:00.141451 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 2 13:08:00.147927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:08:00.159385 kernel: loop0: detected capacity change from 0 to 142488 Mar 2 13:08:00.158647 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 13:08:00.171353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:08:00.185482 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Mar 2 13:08:00.272515 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Mar 2 13:08:00.272536 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Mar 2 13:08:00.272877 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 13:08:00.276331 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 2 13:08:00.285268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:08:00.315956 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 13:08:00.320822 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 13:08:00.332918 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 2 13:08:00.365329 kernel: loop1: detected capacity change from 0 to 217752 Mar 2 13:08:00.473572 kernel: loop2: detected capacity change from 0 to 140768 Mar 2 13:08:00.524459 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 13:08:00.541557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:08:00.580409 kernel: loop3: detected capacity change from 0 to 142488 Mar 2 13:08:00.627238 kernel: loop4: detected capacity change from 0 to 217752 Mar 2 13:08:00.634911 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 2 13:08:00.634972 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 2 13:08:00.666243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:08:00.669389 kernel: loop5: detected capacity change from 0 to 140768 Mar 2 13:08:00.706822 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 13:08:00.707559 (sd-merge)[1193]: Merged extensions into '/usr'. Mar 2 13:08:00.715417 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 13:08:00.715437 systemd[1]: Reloading... Mar 2 13:08:01.057258 zram_generator::config[1226]: No configuration found. Mar 2 13:08:01.346714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:08:01.537460 systemd[1]: Reloading finished in 821 ms. Mar 2 13:08:01.574357 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 13:08:01.574328 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 13:08:01.580021 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 13:08:01.675010 systemd[1]: Starting ensure-sysext.service... Mar 2 13:08:01.680230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:08:01.688388 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 2 13:08:01.688441 systemd[1]: Reloading... Mar 2 13:08:01.738052 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 13:08:01.738705 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 13:08:01.740095 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
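The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, followed by a daemon reload. The sketch below lists the extension images such a merge would pick up, looking in the /etc/extensions, /run/extensions and /var/lib/extensions directories (the first of which is where Ignition linked kubernetes.raw earlier); this directory set is an assumption covering only the common locations, not the full sysext search path.

    #!/usr/bin/env python3
    # Sketch: list sysext images (*.raw files or directories) in the common
    # systemd-sysext search directories. /etc/extensions is where Ignition
    # linked kubernetes.raw in the files stage above.
    import pathlib

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def find_extensions() -> list[str]:
        found = []
        for d in SEARCH_DIRS:
            base = pathlib.Path(d)
            if not base.is_dir():
                continue
            for entry in sorted(base.iterdir()):
                if entry.is_dir() or entry.suffix == ".raw":
                    found.append(str(entry))
        return found

    if __name__ == "__main__":
        for path in find_extensions():
            print(path)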
Mar 2 13:08:01.740540 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 2 13:08:01.740701 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 2 13:08:01.745384 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:08:01.745422 systemd-tmpfiles[1259]: Skipping /boot Mar 2 13:08:01.929243 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:08:01.929443 systemd-tmpfiles[1259]: Skipping /boot Mar 2 13:08:02.061734 zram_generator::config[1285]: No configuration found. Mar 2 13:08:02.205666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:08:02.278464 systemd[1]: Reloading finished in 589 ms. Mar 2 13:08:02.333332 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 13:08:02.360070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:08:02.433663 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:08:02.441872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 13:08:02.448322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 13:08:02.470694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:08:02.498501 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:08:02.507005 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 13:08:02.533810 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 13:08:02.549673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:08:02.549926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:08:02.558405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:08:02.583472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:08:02.593285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:08:02.601551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:08:02.601912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:08:02.603461 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 2 13:08:02.623926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 13:08:02.632532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:08:02.633567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 13:08:02.664581 augenrules[1354]: No rules Mar 2 13:08:02.664743 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 13:08:02.684294 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:08:02.694007 systemd-udevd[1337]: Using default interface naming scheme 'v255'. 
Mar 2 13:08:02.695241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:08:02.695917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:08:02.707359 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 13:08:02.707694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:08:02.760265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:08:02.760646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:08:02.773126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:08:02.782695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:08:02.789897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:08:02.805516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:08:02.832054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:08:02.832503 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:08:02.834235 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 13:08:02.838983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:08:02.843549 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 13:08:02.848694 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 13:08:02.857557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:08:02.858373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 13:08:02.865106 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 13:08:02.865467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 13:08:02.871889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:08:02.872149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:08:02.878595 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 13:08:02.878939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:08:02.887539 systemd[1]: Finished ensure-sysext.service. Mar 2 13:08:02.916381 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1379) Mar 2 13:08:02.924023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:08:02.949024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 13:08:02.949132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 13:08:02.956514 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 2 13:08:02.961312 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 2 13:08:02.962026 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 2 13:08:02.984532 systemd-resolved[1335]: Positive Trust Anchors: Mar 2 13:08:02.984554 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:08:02.984646 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:08:02.992663 systemd-resolved[1335]: Defaulting to hostname 'linux'. Mar 2 13:08:03.000737 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:08:03.007418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:08:03.403586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:08:03.407669 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 13:08:03.428420 systemd-networkd[1397]: lo: Link UP Mar 2 13:08:03.428467 systemd-networkd[1397]: lo: Gained carrier Mar 2 13:08:03.429157 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 13:08:03.433919 systemd-networkd[1397]: Enumeration completed Mar 2 13:08:03.435243 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 2 13:08:03.437226 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:08:03.437261 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:08:03.440258 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:08:03.440318 systemd-networkd[1397]: eth0: Link UP Mar 2 13:08:03.440323 systemd-networkd[1397]: eth0: Gained carrier Mar 2 13:08:03.440334 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:08:03.483578 kernel: ACPI: button: Power Button [PWRF] Mar 2 13:08:03.655811 kernel: hrtimer: interrupt took 22616012 ns Mar 2 13:08:03.655914 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 2 13:08:03.657763 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 13:08:03.661496 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 13:08:03.673140 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 2 13:08:03.673487 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 13:08:03.674804 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:08:03.676653 systemd[1]: Reached target network.target - Network. Mar 2 13:08:03.691788 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 2 13:08:03.697019 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:08:03.698539 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Mar 2 13:08:04.803416 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 13:08:04.803481 systemd-timesyncd[1398]: Initial clock synchronization to Mon 2026-03-02 13:08:04.803292 UTC. Mar 2 13:08:04.804183 systemd-resolved[1335]: Clock change detected. Flushing caches. Mar 2 13:08:04.817850 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 13:08:04.818822 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 13:08:04.853944 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:08:04.876715 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 13:08:04.897385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:08:04.897802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:08:05.268360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:08:05.388679 kernel: kvm_amd: TSC scaling supported Mar 2 13:08:05.388768 kernel: kvm_amd: Nested Virtualization enabled Mar 2 13:08:05.394120 kernel: kvm_amd: Nested Paging enabled Mar 2 13:08:05.394166 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 13:08:05.395031 kernel: kvm_amd: PMU virtualization is disabled Mar 2 13:08:05.475725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:08:05.491702 kernel: EDAC MC: Ver: 3.0.0 Mar 2 13:08:05.532259 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 13:08:05.557252 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 13:08:05.576554 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 13:08:05.785817 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 2 13:08:05.793082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:08:05.797222 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:08:05.800914 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 13:08:05.805479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 13:08:05.810123 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 13:08:05.817305 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 13:08:05.822157 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 13:08:05.826006 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 13:08:05.826103 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:08:05.829174 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:08:05.833889 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 13:08:05.841882 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 13:08:05.856825 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
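The jump in log timestamps near the start of this stretch (from 13:08:03.69 to 13:08:04.80) is the initial clock synchronization: systemd-timesyncd contacted 10.0.0.1 on port 123 and stepped the clock, after which systemd-resolved flushes its caches. Below is a minimal SNTP client sketch illustrating the kind of exchange involved; the packet layout follows the standard 48-byte NTPv4 format, and the server address is simply the one from the log, used here as an example value.

    #!/usr/bin/env python3
    # Sketch: minimal SNTP query, illustrating the UDP/123 exchange that
    # systemd-timesyncd performed against 10.0.0.1 above.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = "10.0.0.1", timeout: float = 2.0) -> float:
        # 48-byte request: LI=0, VN=4, Mode=3 (client) packed into the first byte.
        packet = b"\x23" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            reply, _ = sock.recvfrom(48)
        # Transmit timestamp: 32-bit seconds + 32-bit fraction at offset 40, big-endian.
        seconds, fraction = struct.unpack("!II", reply[40:48])
        return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

    if __name__ == "__main__":
        remote = sntp_time()
        print(time.strftime("server time: %Y-%m-%d %H:%M:%S UTC", time.gmtime(remote)))
        print(f"local offset: {remote - time.time():+.3f} s")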
Mar 2 13:08:05.863530 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 2 13:08:05.869537 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 13:08:05.874522 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:08:05.874735 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:08:05.883376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 2 13:08:05.883418 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 13:08:05.885429 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 13:08:05.889854 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 13:08:05.891948 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 13:08:05.899843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 13:08:05.908876 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 13:08:05.915748 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 13:08:05.918769 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 13:08:05.923981 jq[1433]: false Mar 2 13:08:05.925408 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 2 13:08:05.935413 extend-filesystems[1434]: Found loop3 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found loop4 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found loop5 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found sr0 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda1 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda2 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda3 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found usr Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda4 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda6 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda7 Mar 2 13:08:05.957903 extend-filesystems[1434]: Found vda9 Mar 2 13:08:05.957903 extend-filesystems[1434]: Checking size of /dev/vda9 Mar 2 13:08:05.957903 extend-filesystems[1434]: Resized partition /dev/vda9 Mar 2 13:08:06.081383 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 13:08:06.081425 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1379) Mar 2 13:08:06.081441 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 13:08:05.938876 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 13:08:05.987543 dbus-daemon[1432]: [system] SELinux support is enabled Mar 2 13:08:06.091245 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Mar 2 13:08:06.091245 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 13:08:06.091245 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 13:08:06.091245 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 13:08:05.952888 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
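The extend-filesystems run above grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB. A few lines of arithmetic, using only the figures from the log, translate those block counts into sizes:

    #!/usr/bin/env python3
    # Sketch: convert the block counts reported by EXT4/resize2fs above into
    # human-readable sizes (4 KiB blocks, figures taken from the log).
    BLOCK_SIZE = 4096
    OLD_BLOCKS = 553_472
    NEW_BLOCKS = 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    if __name__ == "__main__":
        print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")
        print(f"after  resize: {gib(NEW_BLOCKS):.2f} GiB")
        print(f"growth       : {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")

In other words, the root filesystem grows from roughly 2.1 GiB to about 7.1 GiB to fill the enlarged vda9 partition.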
Mar 2 13:08:06.372477 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Mar 2 13:08:05.983915 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 13:08:05.991775 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 13:08:06.000997 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 13:08:06.005842 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 13:08:06.378772 jq[1453]: true Mar 2 13:08:06.009782 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 13:08:06.379015 update_engine[1452]: I20260302 13:08:06.070667 1452 main.cc:92] Flatcar Update Engine starting Mar 2 13:08:06.379015 update_engine[1452]: I20260302 13:08:06.073683 1452 update_check_scheduler.cc:74] Next update check in 2m10s Mar 2 13:08:06.025300 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 13:08:06.046274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 2 13:08:06.058331 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 13:08:06.061111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 13:08:06.061743 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 13:08:06.380374 jq[1460]: true Mar 2 13:08:06.061962 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 13:08:06.068030 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 13:08:06.068277 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 2 13:08:06.076929 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 13:08:06.077167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 13:08:06.310978 systemd-networkd[1397]: eth0: Gained IPv6LL Mar 2 13:08:06.364240 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 13:08:06.366916 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 13:08:06.399500 systemd[1]: Started update-engine.service - Update Engine. Mar 2 13:08:06.400791 tar[1458]: linux-amd64/LICENSE Mar 2 13:08:06.401140 tar[1458]: linux-amd64/helm Mar 2 13:08:06.405113 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 13:08:06.427931 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 13:08:06.443752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:06.446381 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Mar 2 13:08:06.446403 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 13:08:06.449251 systemd-logind[1450]: New seat seat0. Mar 2 13:08:06.458823 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 13:08:06.461570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 13:08:06.461663 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 2 13:08:06.466752 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 2 13:08:06.466775 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 13:08:06.478831 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 13:08:06.484356 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 13:08:06.871030 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 13:08:06.915262 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 13:08:06.915592 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 13:08:06.993431 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Mar 2 13:08:07.106084 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 13:08:07.109667 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 13:08:07.127451 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 13:08:07.175530 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 13:08:07.178827 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 13:08:07.268303 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 13:08:07.307577 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 13:08:07.342549 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 13:08:07.342866 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 13:08:07.371189 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 13:08:07.575739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 13:08:07.588837 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 13:08:07.603981 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 13:08:07.608143 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 13:08:08.380366 containerd[1461]: time="2026-03-02T13:08:08.380086953Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 13:08:08.605199 containerd[1461]: time="2026-03-02T13:08:08.605024958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.612427 containerd[1461]: time="2026-03-02T13:08:08.612357029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:08:08.612427 containerd[1461]: time="2026-03-02T13:08:08.612403978Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 13:08:08.612427 containerd[1461]: time="2026-03-02T13:08:08.612420398Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 13:08:08.612874 containerd[1461]: time="2026-03-02T13:08:08.612820786Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Mar 2 13:08:08.612937 containerd[1461]: time="2026-03-02T13:08:08.612878704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.613090 containerd[1461]: time="2026-03-02T13:08:08.613014438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:08:08.613135 containerd[1461]: time="2026-03-02T13:08:08.613112600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.613950 containerd[1461]: time="2026-03-02T13:08:08.613884502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:08:08.613950 containerd[1461]: time="2026-03-02T13:08:08.613931781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.613950 containerd[1461]: time="2026-03-02T13:08:08.613945967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:08:08.613950 containerd[1461]: time="2026-03-02T13:08:08.613955465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.614293 containerd[1461]: time="2026-03-02T13:08:08.614223405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.617189 containerd[1461]: time="2026-03-02T13:08:08.615881431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:08:08.617189 containerd[1461]: time="2026-03-02T13:08:08.616022455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:08:08.617189 containerd[1461]: time="2026-03-02T13:08:08.616035479Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 2 13:08:08.617915 containerd[1461]: time="2026-03-02T13:08:08.617281836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 13:08:08.618413 containerd[1461]: time="2026-03-02T13:08:08.618130691Z" level=info msg="metadata content store policy set" policy=shared Mar 2 13:08:08.630336 containerd[1461]: time="2026-03-02T13:08:08.630311797Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 13:08:08.630741 containerd[1461]: time="2026-03-02T13:08:08.630658876Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 13:08:08.630851 containerd[1461]: time="2026-03-02T13:08:08.630834363Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 13:08:08.630909 containerd[1461]: time="2026-03-02T13:08:08.630896890Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 2 13:08:08.630986 containerd[1461]: time="2026-03-02T13:08:08.630972662Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 13:08:08.631468 containerd[1461]: time="2026-03-02T13:08:08.631450294Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 2 13:08:08.631973 containerd[1461]: time="2026-03-02T13:08:08.631954385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 13:08:08.632324 containerd[1461]: time="2026-03-02T13:08:08.632306803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 2 13:08:08.632380 containerd[1461]: time="2026-03-02T13:08:08.632368128Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 2 13:08:08.632424 containerd[1461]: time="2026-03-02T13:08:08.632412651Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 2 13:08:08.632467 containerd[1461]: time="2026-03-02T13:08:08.632456112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.632554 containerd[1461]: time="2026-03-02T13:08:08.632540730Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.632706 containerd[1461]: time="2026-03-02T13:08:08.632688425Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.632803 containerd[1461]: time="2026-03-02T13:08:08.632786479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.632881 containerd[1461]: time="2026-03-02T13:08:08.632867540Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.632941 containerd[1461]: time="2026-03-02T13:08:08.632929416Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.632984 containerd[1461]: time="2026-03-02T13:08:08.632973067Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.633103 containerd[1461]: time="2026-03-02T13:08:08.633087351Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 13:08:08.633261 containerd[1461]: time="2026-03-02T13:08:08.633243792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633315 containerd[1461]: time="2026-03-02T13:08:08.633304085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633369 containerd[1461]: time="2026-03-02T13:08:08.633356924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633412 containerd[1461]: time="2026-03-02T13:08:08.633401807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633470 containerd[1461]: time="2026-03-02T13:08:08.633457121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Mar 2 13:08:08.633567 containerd[1461]: time="2026-03-02T13:08:08.633549954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633692 containerd[1461]: time="2026-03-02T13:08:08.633678545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633741 containerd[1461]: time="2026-03-02T13:08:08.633730652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633784 containerd[1461]: time="2026-03-02T13:08:08.633773843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633868 containerd[1461]: time="2026-03-02T13:08:08.633854513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633930 containerd[1461]: time="2026-03-02T13:08:08.633917621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.633972 containerd[1461]: time="2026-03-02T13:08:08.633962274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.634080 containerd[1461]: time="2026-03-02T13:08:08.634030942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.634255 containerd[1461]: time="2026-03-02T13:08:08.634239592Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 13:08:08.634401 containerd[1461]: time="2026-03-02T13:08:08.634386216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.634448 containerd[1461]: time="2026-03-02T13:08:08.634438033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634504306Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634721522Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634741970Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634752049Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634762338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634771044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634845022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634874648Z" level=info msg="NRI interface is disabled by configuration." 
Mar 2 13:08:08.636654 containerd[1461]: time="2026-03-02T13:08:08.634885258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 2 13:08:08.638585 containerd[1461]: time="2026-03-02T13:08:08.635849950Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 13:08:08.638585 containerd[1461]: time="2026-03-02T13:08:08.635950968Z" level=info msg="Connect containerd service" Mar 2 13:08:08.638585 containerd[1461]: time="2026-03-02T13:08:08.636078075Z" level=info msg="using legacy CRI server" Mar 2 13:08:08.638585 containerd[1461]: time="2026-03-02T13:08:08.636088504Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 13:08:08.639424 containerd[1461]: time="2026-03-02T13:08:08.639405038Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 13:08:08.641748 containerd[1461]: time="2026-03-02T13:08:08.641678107Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:08:08.642254 containerd[1461]: time="2026-03-02T13:08:08.641969712Z" level=info msg="Start subscribing containerd event" Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.642580542Z" level=info msg="Start recovering state" Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.642743662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.642844470Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.642871936Z" level=info msg="Start event monitor" Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.642948630Z" level=info msg="Start snapshots syncer" Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.642995256Z" level=info msg="Start cni network conf syncer for default" Mar 2 13:08:08.646381 containerd[1461]: time="2026-03-02T13:08:08.643019192Z" level=info msg="Start streaming server" Mar 2 13:08:08.643319 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 13:08:08.646909 containerd[1461]: time="2026-03-02T13:08:08.646890260Z" level=info msg="containerd successfully booted in 0.271494s" Mar 2 13:08:08.764249 tar[1458]: linux-amd64/README.md Mar 2 13:08:08.786430 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 13:08:11.113928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:11.122364 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:08:11.123345 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:08:11.139942 systemd[1]: Startup finished in 1.924s (kernel) + 6.390s (initrd) + 19.921s (userspace) = 28.237s. Mar 2 13:08:11.904002 kubelet[1544]: E0302 13:08:11.903871 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:08:11.907676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:08:11.907889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:08:11.908360 systemd[1]: kubelet.service: Consumed 4.522s CPU time. Mar 2 13:08:15.379132 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:08:15.381683 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:49518.service - OpenSSH per-connection server daemon (10.0.0.1:49518). Mar 2 13:08:15.466425 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 49518 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:15.469670 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:15.485343 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:08:15.499982 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:08:15.503477 systemd-logind[1450]: New session 1 of user core. Mar 2 13:08:15.548123 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 2 13:08:15.563960 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 13:08:15.568722 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:08:15.714519 systemd[1561]: Queued start job for default target default.target. Mar 2 13:08:15.727486 systemd[1561]: Created slice app.slice - User Application Slice. Mar 2 13:08:15.727568 systemd[1561]: Reached target paths.target - Paths. Mar 2 13:08:15.727592 systemd[1561]: Reached target timers.target - Timers. Mar 2 13:08:15.729582 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:08:15.747887 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:08:15.748054 systemd[1561]: Reached target sockets.target - Sockets. Mar 2 13:08:15.748133 systemd[1561]: Reached target basic.target - Basic System. Mar 2 13:08:15.748174 systemd[1561]: Reached target default.target - Main User Target. Mar 2 13:08:15.748216 systemd[1561]: Startup finished in 169ms. Mar 2 13:08:15.748827 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:08:15.761870 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:08:15.836583 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:49532.service - OpenSSH per-connection server daemon (10.0.0.1:49532). Mar 2 13:08:15.899521 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:15.901758 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:15.908846 systemd-logind[1450]: New session 2 of user core. Mar 2 13:08:15.922163 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 13:08:16.002349 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:16.019370 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:49532.service: Deactivated successfully. Mar 2 13:08:16.025857 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 13:08:16.028436 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Mar 2 13:08:16.036054 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:49536.service - OpenSSH per-connection server daemon (10.0.0.1:49536). Mar 2 13:08:16.037444 systemd-logind[1450]: Removed session 2. Mar 2 13:08:16.084685 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 49536 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:16.088209 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:16.098253 systemd-logind[1450]: New session 3 of user core. Mar 2 13:08:16.123503 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:08:16.197792 sshd[1579]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:16.229111 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:49536.service: Deactivated successfully. Mar 2 13:08:16.236782 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 13:08:16.239400 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Mar 2 13:08:16.256841 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:49538.service - OpenSSH per-connection server daemon (10.0.0.1:49538). Mar 2 13:08:16.260940 systemd-logind[1450]: Removed session 3. 
Mar 2 13:08:16.329818 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 49538 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:16.333125 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:16.341542 systemd-logind[1450]: New session 4 of user core. Mar 2 13:08:16.348824 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:08:16.448521 sshd[1586]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:16.463434 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:49538.service: Deactivated successfully. Mar 2 13:08:16.465323 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:08:16.466486 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:08:16.480548 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:49548.service - OpenSSH per-connection server daemon (10.0.0.1:49548). Mar 2 13:08:16.482820 systemd-logind[1450]: Removed session 4. Mar 2 13:08:16.547698 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 49548 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:16.549972 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:16.558295 systemd-logind[1450]: New session 5 of user core. Mar 2 13:08:16.566530 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 13:08:16.656833 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 13:08:16.657270 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:08:16.676040 sudo[1596]: pam_unix(sudo:session): session closed for user root Mar 2 13:08:16.678773 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:16.696003 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:49548.service: Deactivated successfully. Mar 2 13:08:16.698202 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 13:08:16.704368 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Mar 2 13:08:16.713215 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:49564.service - OpenSSH per-connection server daemon (10.0.0.1:49564). Mar 2 13:08:16.714833 systemd-logind[1450]: Removed session 5. Mar 2 13:08:16.763988 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 49564 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:16.767166 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:16.780560 systemd-logind[1450]: New session 6 of user core. Mar 2 13:08:16.789911 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:08:16.863490 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 13:08:16.863973 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:08:16.871860 sudo[1605]: pam_unix(sudo:session): session closed for user root Mar 2 13:08:16.880288 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 13:08:16.880843 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:08:16.907976 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 13:08:16.911461 auditctl[1608]: No rules Mar 2 13:08:16.912120 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 2 13:08:16.912448 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 13:08:16.918543 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:08:17.097860 augenrules[1626]: No rules Mar 2 13:08:17.099811 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:08:17.103740 sudo[1604]: pam_unix(sudo:session): session closed for user root Mar 2 13:08:17.107153 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:17.126843 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:49564.service: Deactivated successfully. Mar 2 13:08:17.130049 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:08:17.132273 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Mar 2 13:08:17.144014 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:49576.service - OpenSSH per-connection server daemon (10.0.0.1:49576). Mar 2 13:08:17.145273 systemd-logind[1450]: Removed session 6. Mar 2 13:08:17.184927 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 49576 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:08:17.187437 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:17.193455 systemd-logind[1450]: New session 7 of user core. Mar 2 13:08:17.199811 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 13:08:17.290143 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 13:08:17.290570 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:08:19.288897 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 13:08:19.289235 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 13:08:19.746997 dockerd[1657]: time="2026-03-02T13:08:19.746727889Z" level=info msg="Starting up" Mar 2 13:08:19.873435 dockerd[1657]: time="2026-03-02T13:08:19.873328719Z" level=info msg="Loading containers: start." Mar 2 13:08:20.025751 kernel: Initializing XFRM netlink socket Mar 2 13:08:20.123539 systemd-networkd[1397]: docker0: Link UP Mar 2 13:08:20.144814 dockerd[1657]: time="2026-03-02T13:08:20.144746057Z" level=info msg="Loading containers: done." Mar 2 13:08:20.162861 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2782343194-merged.mount: Deactivated successfully. Mar 2 13:08:20.166303 dockerd[1657]: time="2026-03-02T13:08:20.166237476Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 13:08:20.166385 dockerd[1657]: time="2026-03-02T13:08:20.166362479Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 13:08:20.166568 dockerd[1657]: time="2026-03-02T13:08:20.166469218Z" level=info msg="Daemon has completed initialization" Mar 2 13:08:20.216290 dockerd[1657]: time="2026-03-02T13:08:20.216184008Z" level=info msg="API listen on /run/docker.sock" Mar 2 13:08:20.216368 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 2 13:08:20.707106 containerd[1461]: time="2026-03-02T13:08:20.706947407Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 2 13:08:21.261230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311831429.mount: Deactivated successfully. Mar 2 13:08:22.000451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 13:08:22.007046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:23.175477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:23.181160 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:08:23.427874 kubelet[1873]: E0302 13:08:23.426903 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:08:23.434410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:08:23.434803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:08:23.435385 systemd[1]: kubelet.service: Consumed 1.403s CPU time. Mar 2 13:08:23.502981 containerd[1461]: time="2026-03-02T13:08:23.502877687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:23.503681 containerd[1461]: time="2026-03-02T13:08:23.503550356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 2 13:08:23.504930 containerd[1461]: time="2026-03-02T13:08:23.504797703Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:23.508643 containerd[1461]: time="2026-03-02T13:08:23.508480547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:23.509578 containerd[1461]: time="2026-03-02T13:08:23.509533857Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.802414098s" Mar 2 13:08:23.509714 containerd[1461]: time="2026-03-02T13:08:23.509581827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 2 13:08:23.510964 containerd[1461]: time="2026-03-02T13:08:23.510893005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 2 13:08:25.186374 containerd[1461]: time="2026-03-02T13:08:25.186182572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:25.187804 containerd[1461]: time="2026-03-02T13:08:25.187033971Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 2 13:08:25.188848 containerd[1461]: time="2026-03-02T13:08:25.188783068Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:25.194671 containerd[1461]: time="2026-03-02T13:08:25.194537814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:25.196556 containerd[1461]: time="2026-03-02T13:08:25.196490781Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.685558903s" Mar 2 13:08:25.196640 containerd[1461]: time="2026-03-02T13:08:25.196562706Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 2 13:08:25.198493 containerd[1461]: time="2026-03-02T13:08:25.198352107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 2 13:08:26.938760 containerd[1461]: time="2026-03-02T13:08:26.938569366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:26.940142 containerd[1461]: time="2026-03-02T13:08:26.939547444Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 2 13:08:26.941070 containerd[1461]: time="2026-03-02T13:08:26.940939211Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:26.945010 containerd[1461]: time="2026-03-02T13:08:26.944956568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:26.946492 containerd[1461]: time="2026-03-02T13:08:26.946408246Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.748031033s" Mar 2 13:08:26.946492 containerd[1461]: time="2026-03-02T13:08:26.946462778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 2 13:08:26.947915 containerd[1461]: time="2026-03-02T13:08:26.947875814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 2 13:08:28.799005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177321334.mount: Deactivated successfully. 
Mar 2 13:08:29.890780 containerd[1461]: time="2026-03-02T13:08:29.890379826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:29.892132 containerd[1461]: time="2026-03-02T13:08:29.891262309Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 2 13:08:29.892407 containerd[1461]: time="2026-03-02T13:08:29.892298760Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:29.897223 containerd[1461]: time="2026-03-02T13:08:29.897173983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:29.898013 containerd[1461]: time="2026-03-02T13:08:29.897902940Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.949982331s" Mar 2 13:08:29.898013 containerd[1461]: time="2026-03-02T13:08:29.898003367Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 2 13:08:29.899478 containerd[1461]: time="2026-03-02T13:08:29.899247118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 2 13:08:30.741197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491942612.mount: Deactivated successfully. 
Mar 2 13:08:32.892526 containerd[1461]: time="2026-03-02T13:08:32.892243408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:32.894771 containerd[1461]: time="2026-03-02T13:08:32.893205787Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 2 13:08:32.894995 containerd[1461]: time="2026-03-02T13:08:32.894918142Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:32.899209 containerd[1461]: time="2026-03-02T13:08:32.899160492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:32.901829 containerd[1461]: time="2026-03-02T13:08:32.901746563Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.002455643s" Mar 2 13:08:32.901829 containerd[1461]: time="2026-03-02T13:08:32.901813328Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 2 13:08:32.903726 containerd[1461]: time="2026-03-02T13:08:32.903705275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 13:08:33.342707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395409736.mount: Deactivated successfully. 
Mar 2 13:08:33.350804 containerd[1461]: time="2026-03-02T13:08:33.350720655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:33.352045 containerd[1461]: time="2026-03-02T13:08:33.351974496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 13:08:33.354124 containerd[1461]: time="2026-03-02T13:08:33.354029063Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:33.358504 containerd[1461]: time="2026-03-02T13:08:33.358384217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:33.360842 containerd[1461]: time="2026-03-02T13:08:33.360705696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 456.910313ms" Mar 2 13:08:33.360842 containerd[1461]: time="2026-03-02T13:08:33.360830869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 13:08:33.361977 containerd[1461]: time="2026-03-02T13:08:33.361886987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 2 13:08:33.499238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 13:08:33.508876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:34.194487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:34.215283 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:08:34.218334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660269383.mount: Deactivated successfully. Mar 2 13:08:34.505662 kubelet[1969]: E0302 13:08:34.505159 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:08:34.508696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:08:34.509010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:08:34.510033 systemd[1]: kubelet.service: Consumed 1.001s CPU time. 
Mar 2 13:08:36.430324 containerd[1461]: time="2026-03-02T13:08:36.430046758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:36.432201 containerd[1461]: time="2026-03-02T13:08:36.430856542Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 2 13:08:36.432413 containerd[1461]: time="2026-03-02T13:08:36.432334740Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:36.436550 containerd[1461]: time="2026-03-02T13:08:36.436428114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:36.438399 containerd[1461]: time="2026-03-02T13:08:36.438352850Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.076414166s" Mar 2 13:08:36.438399 containerd[1461]: time="2026-03-02T13:08:36.438397964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 2 13:08:38.179855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:38.180066 systemd[1]: kubelet.service: Consumed 1.001s CPU time. Mar 2 13:08:38.195894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:38.227359 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-7.scope)... Mar 2 13:08:38.227408 systemd[1]: Reloading... Mar 2 13:08:38.372363 zram_generator::config[2109]: No configuration found. Mar 2 13:08:38.776799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:08:38.879141 systemd[1]: Reloading finished in 651 ms. Mar 2 13:08:38.965747 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 13:08:38.965871 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 13:08:38.966237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:38.983393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:39.214838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:39.236107 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:08:39.339927 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 13:08:39.579832 kubelet[2155]: I0302 13:08:39.579760 2155 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 13:08:39.579832 kubelet[2155]: I0302 13:08:39.579815 2155 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:08:39.579963 kubelet[2155]: I0302 13:08:39.579885 2155 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:08:39.579963 kubelet[2155]: I0302 13:08:39.579893 2155 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:08:39.580223 kubelet[2155]: I0302 13:08:39.580176 2155 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 13:08:39.611671 kubelet[2155]: E0302 13:08:39.611575 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:08:39.611988 kubelet[2155]: I0302 13:08:39.611873 2155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:08:39.618361 kubelet[2155]: E0302 13:08:39.618275 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:08:39.618361 kubelet[2155]: I0302 13:08:39.618362 2155 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:08:39.626033 kubelet[2155]: I0302 13:08:39.625964 2155 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:08:39.627367 kubelet[2155]: I0302 13:08:39.627290 2155 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:08:39.627517 kubelet[2155]: I0302 13:08:39.627339 2155 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:08:39.627751 kubelet[2155]: I0302 13:08:39.627551 2155 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 13:08:39.627751 kubelet[2155]: I0302 13:08:39.627561 2155 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 13:08:39.627751 kubelet[2155]: I0302 13:08:39.627720 2155 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:08:39.629523 kubelet[2155]: I0302 13:08:39.629477 2155 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 13:08:39.630044 kubelet[2155]: I0302 13:08:39.630001 2155 kubelet.go:482] "Attempting to sync node with API server" Mar 2 13:08:39.630044 kubelet[2155]: I0302 13:08:39.630034 2155 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:08:39.630220 kubelet[2155]: I0302 13:08:39.630175 2155 kubelet.go:394] "Adding apiserver pod source" Mar 2 13:08:39.630308 kubelet[2155]: I0302 13:08:39.630280 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:08:39.633175 kubelet[2155]: I0302 13:08:39.633139 2155 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:08:39.643790 kubelet[2155]: I0302 13:08:39.643725 2155 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:08:39.643790 kubelet[2155]: I0302 13:08:39.643775 2155 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:08:39.643927 kubelet[2155]: 
W0302 13:08:39.643899 2155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 2 13:08:39.648287 kubelet[2155]: I0302 13:08:39.648230 2155 server.go:1257] "Started kubelet" Mar 2 13:08:39.650423 kubelet[2155]: I0302 13:08:39.648435 2155 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:08:39.650423 kubelet[2155]: I0302 13:08:39.649191 2155 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:08:39.650423 kubelet[2155]: I0302 13:08:39.649285 2155 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:08:39.650423 kubelet[2155]: I0302 13:08:39.649569 2155 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:08:39.650423 kubelet[2155]: I0302 13:08:39.650025 2155 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:08:39.650423 kubelet[2155]: I0302 13:08:39.650180 2155 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 13:08:39.653367 kubelet[2155]: I0302 13:08:39.653316 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:08:39.655884 kubelet[2155]: E0302 13:08:39.654283 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899082d734f6aae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:08:39.648135854 +0000 UTC m=+0.403757238,LastTimestamp:2026-03-02 13:08:39.648135854 +0000 UTC m=+0.403757238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:08:39.657591 kubelet[2155]: E0302 13:08:39.657497 2155 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:08:39.657743 kubelet[2155]: I0302 13:08:39.657713 2155 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:08:39.657743 kubelet[2155]: I0302 13:08:39.657541 2155 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 13:08:39.657878 kubelet[2155]: E0302 13:08:39.657841 2155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:08:39.658155 kubelet[2155]: I0302 13:08:39.658108 2155 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:08:39.659148 kubelet[2155]: E0302 13:08:39.659036 2155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Mar 2 13:08:39.659689 kubelet[2155]: I0302 13:08:39.659577 2155 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:08:39.659843 kubelet[2155]: I0302 13:08:39.659792 2155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:08:39.664657 kubelet[2155]: I0302 13:08:39.662900 2155 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:08:39.691518 kubelet[2155]: I0302 13:08:39.691479 2155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:08:39.694728 kubelet[2155]: I0302 13:08:39.694681 2155 cpu_manager.go:225] "Starting" policy="none" Mar 2 13:08:39.694728 kubelet[2155]: I0302 13:08:39.694715 2155 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 13:08:39.695435 kubelet[2155]: I0302 13:08:39.694842 2155 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:08:39.695435 kubelet[2155]: I0302 13:08:39.695126 2155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 13:08:39.695435 kubelet[2155]: I0302 13:08:39.695220 2155 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 13:08:39.695435 kubelet[2155]: I0302 13:08:39.695326 2155 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 13:08:39.695572 kubelet[2155]: E0302 13:08:39.695463 2155 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:08:39.698307 kubelet[2155]: I0302 13:08:39.698267 2155 policy_none.go:50] "Start" Mar 2 13:08:39.698589 kubelet[2155]: I0302 13:08:39.698555 2155 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:08:39.698687 kubelet[2155]: I0302 13:08:39.698590 2155 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:08:39.702882 kubelet[2155]: I0302 13:08:39.702865 2155 policy_none.go:44] "Start" Mar 2 13:08:39.709168 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:08:39.738813 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 13:08:39.745321 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 2 13:08:39.759317 kubelet[2155]: E0302 13:08:39.759191 2155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:08:39.765916 kubelet[2155]: E0302 13:08:39.765770 2155 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:08:39.768133 kubelet[2155]: I0302 13:08:39.766875 2155 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:08:39.768133 kubelet[2155]: I0302 13:08:39.766961 2155 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:08:39.768133 kubelet[2155]: I0302 13:08:39.767813 2155 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:08:39.772132 kubelet[2155]: E0302 13:08:39.772036 2155 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:08:39.772320 kubelet[2155]: E0302 13:08:39.772193 2155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:08:39.877860 kubelet[2155]: E0302 13:08:39.877029 2155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Mar 2 13:08:40.078334 kubelet[2155]: I0302 13:08:40.078079 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a59ebe7391a4d2fe758e2728f6bdad9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a59ebe7391a4d2fe758e2728f6bdad9\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:40.080927 kubelet[2155]: I0302 13:08:40.080206 2155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:40.081534 kubelet[2155]: E0302 13:08:40.081299 2155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Mar 2 13:08:40.140356 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 2 13:08:40.159548 kubelet[2155]: E0302 13:08:40.159435 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:40.164242 systemd[1]: Created slice kubepods-burstable-pod0a59ebe7391a4d2fe758e2728f6bdad9.slice - libcontainer container kubepods-burstable-pod0a59ebe7391a4d2fe758e2728f6bdad9.slice. Mar 2 13:08:40.166970 kubelet[2155]: E0302 13:08:40.166919 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:40.168809 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 2 13:08:40.170954 kubelet[2155]: E0302 13:08:40.170910 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:40.179339 kubelet[2155]: I0302 13:08:40.179258 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a59ebe7391a4d2fe758e2728f6bdad9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a59ebe7391a4d2fe758e2728f6bdad9\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:40.179339 kubelet[2155]: I0302 13:08:40.179300 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:40.179431 kubelet[2155]: I0302 13:08:40.179337 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:40.179431 kubelet[2155]: I0302 13:08:40.179370 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:40.179431 kubelet[2155]: I0302 13:08:40.179387 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:40.179496 kubelet[2155]: I0302 13:08:40.179442 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:40.179496 kubelet[2155]: I0302 13:08:40.179458 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a59ebe7391a4d2fe758e2728f6bdad9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a59ebe7391a4d2fe758e2728f6bdad9\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:40.179496 kubelet[2155]: I0302 13:08:40.179471 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:40.293354 kubelet[2155]: E0302 13:08:40.293162 2155 controller.go:201] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Mar 2 13:08:40.350357 kubelet[2155]: I0302 13:08:40.350156 2155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:40.352375 kubelet[2155]: E0302 13:08:40.351185 2155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Mar 2 13:08:40.489147 kubelet[2155]: E0302 13:08:40.488710 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:40.495209 containerd[1461]: time="2026-03-02T13:08:40.495005224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:40.499323 kubelet[2155]: E0302 13:08:40.499210 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:40.500449 containerd[1461]: time="2026-03-02T13:08:40.500225887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a59ebe7391a4d2fe758e2728f6bdad9,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:40.502956 kubelet[2155]: E0302 13:08:40.502834 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:40.503595 containerd[1461]: time="2026-03-02T13:08:40.503469581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:40.753442 kubelet[2155]: I0302 13:08:40.753398 2155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:40.754060 kubelet[2155]: E0302 13:08:40.753839 2155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Mar 2 13:08:41.098408 kubelet[2155]: E0302 13:08:41.098013 2155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" Mar 2 13:08:41.191075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927056430.mount: Deactivated successfully. 
Mar 2 13:08:41.198817 containerd[1461]: time="2026-03-02T13:08:41.198748399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:08:41.202453 containerd[1461]: time="2026-03-02T13:08:41.202298802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 13:08:41.203372 containerd[1461]: time="2026-03-02T13:08:41.203300687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:08:41.204417 containerd[1461]: time="2026-03-02T13:08:41.204347733Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:08:41.205713 containerd[1461]: time="2026-03-02T13:08:41.205670821Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:08:41.206861 containerd[1461]: time="2026-03-02T13:08:41.206821859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:08:41.207660 containerd[1461]: time="2026-03-02T13:08:41.207554731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:08:41.210413 containerd[1461]: time="2026-03-02T13:08:41.210360402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:08:41.212574 containerd[1461]: time="2026-03-02T13:08:41.212538774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.967586ms" Mar 2 13:08:41.213530 containerd[1461]: time="2026-03-02T13:08:41.213416662Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 713.066103ms" Mar 2 13:08:41.214317 containerd[1461]: time="2026-03-02T13:08:41.214133225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.812077ms" Mar 2 13:08:41.561969 kubelet[2155]: I0302 13:08:41.561823 2155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:41.562756 kubelet[2155]: E0302 13:08:41.562579 2155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: 
connect: connection refused" node="localhost" Mar 2 13:08:41.774176 kubelet[2155]: E0302 13:08:41.773959 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:08:41.861128 containerd[1461]: time="2026-03-02T13:08:41.860466642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:41.861128 containerd[1461]: time="2026-03-02T13:08:41.860677652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:41.861128 containerd[1461]: time="2026-03-02T13:08:41.860705494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:41.861549 containerd[1461]: time="2026-03-02T13:08:41.861215007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:41.862420 containerd[1461]: time="2026-03-02T13:08:41.861792842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:41.862420 containerd[1461]: time="2026-03-02T13:08:41.861875054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:41.862420 containerd[1461]: time="2026-03-02T13:08:41.861899419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:41.862925 containerd[1461]: time="2026-03-02T13:08:41.862051500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:41.869429 containerd[1461]: time="2026-03-02T13:08:41.869308590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:41.869768 containerd[1461]: time="2026-03-02T13:08:41.869686159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:41.869876 containerd[1461]: time="2026-03-02T13:08:41.869762411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:41.870240 containerd[1461]: time="2026-03-02T13:08:41.870118860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:42.110939 systemd[1]: Started cri-containerd-2b040c98175b06d7d444cb230956d679a146149080a8c44c9125d754f47dde79.scope - libcontainer container 2b040c98175b06d7d444cb230956d679a146149080a8c44c9125d754f47dde79. Mar 2 13:08:42.113923 systemd[1]: Started cri-containerd-3670431a38919f61e290cfa6a9b815b5db1032198ef4a4db15c076f8e1acf4f8.scope - libcontainer container 3670431a38919f61e290cfa6a9b815b5db1032198ef4a4db15c076f8e1acf4f8. 
Mar 2 13:08:42.127257 systemd[1]: Started cri-containerd-7a5bc903ac197e43d05b72f9a78d8425b351ab97f6c9f31eeef1699fbc5fe8ea.scope - libcontainer container 7a5bc903ac197e43d05b72f9a78d8425b351ab97f6c9f31eeef1699fbc5fe8ea. Mar 2 13:08:42.574147 containerd[1461]: time="2026-03-02T13:08:42.573972923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b040c98175b06d7d444cb230956d679a146149080a8c44c9125d754f47dde79\"" Mar 2 13:08:42.578494 kubelet[2155]: E0302 13:08:42.578394 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:42.585509 containerd[1461]: time="2026-03-02T13:08:42.585393564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a59ebe7391a4d2fe758e2728f6bdad9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a5bc903ac197e43d05b72f9a78d8425b351ab97f6c9f31eeef1699fbc5fe8ea\"" Mar 2 13:08:42.587479 kubelet[2155]: E0302 13:08:42.587452 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:42.595715 containerd[1461]: time="2026-03-02T13:08:42.595422596Z" level=info msg="CreateContainer within sandbox \"2b040c98175b06d7d444cb230956d679a146149080a8c44c9125d754f47dde79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:08:42.599419 containerd[1461]: time="2026-03-02T13:08:42.599326307Z" level=info msg="CreateContainer within sandbox \"7a5bc903ac197e43d05b72f9a78d8425b351ab97f6c9f31eeef1699fbc5fe8ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:08:42.603350 containerd[1461]: time="2026-03-02T13:08:42.603291294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"3670431a38919f61e290cfa6a9b815b5db1032198ef4a4db15c076f8e1acf4f8\"" Mar 2 13:08:42.605401 kubelet[2155]: E0302 13:08:42.605211 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:42.635441 containerd[1461]: time="2026-03-02T13:08:42.630346566Z" level=info msg="CreateContainer within sandbox \"3670431a38919f61e290cfa6a9b815b5db1032198ef4a4db15c076f8e1acf4f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:08:42.634578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241763386.mount: Deactivated successfully. Mar 2 13:08:42.647192 containerd[1461]: time="2026-03-02T13:08:42.647086406Z" level=info msg="CreateContainer within sandbox \"2b040c98175b06d7d444cb230956d679a146149080a8c44c9125d754f47dde79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3d9bc3798f96da007e11713306c629a48a07d7ac64fff80ce518c88a295afe7\"" Mar 2 13:08:42.648196 containerd[1461]: time="2026-03-02T13:08:42.648167532Z" level=info msg="StartContainer for \"d3d9bc3798f96da007e11713306c629a48a07d7ac64fff80ce518c88a295afe7\"" Mar 2 13:08:42.648451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903618342.mount: Deactivated successfully. 
Mar 2 13:08:42.657209 containerd[1461]: time="2026-03-02T13:08:42.657063985Z" level=info msg="CreateContainer within sandbox \"7a5bc903ac197e43d05b72f9a78d8425b351ab97f6c9f31eeef1699fbc5fe8ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31c14321d830975b9e9e84fa2c57eadcef62ebd9702e80d02cab272de65923c7\"" Mar 2 13:08:42.657872 containerd[1461]: time="2026-03-02T13:08:42.657711515Z" level=info msg="StartContainer for \"31c14321d830975b9e9e84fa2c57eadcef62ebd9702e80d02cab272de65923c7\"" Mar 2 13:08:42.662868 containerd[1461]: time="2026-03-02T13:08:42.662772537Z" level=info msg="CreateContainer within sandbox \"3670431a38919f61e290cfa6a9b815b5db1032198ef4a4db15c076f8e1acf4f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"726123b8fb74c763212d1f70a2e78eac3442a1158f1f5a7a1295217deb087c05\"" Mar 2 13:08:42.664429 containerd[1461]: time="2026-03-02T13:08:42.663284258Z" level=info msg="StartContainer for \"726123b8fb74c763212d1f70a2e78eac3442a1158f1f5a7a1295217deb087c05\"" Mar 2 13:08:42.809456 kubelet[2155]: E0302 13:08:42.808935 2155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="3.2s" Mar 2 13:08:42.921471 systemd[1]: Started cri-containerd-31c14321d830975b9e9e84fa2c57eadcef62ebd9702e80d02cab272de65923c7.scope - libcontainer container 31c14321d830975b9e9e84fa2c57eadcef62ebd9702e80d02cab272de65923c7. Mar 2 13:08:42.929876 systemd[1]: Started cri-containerd-726123b8fb74c763212d1f70a2e78eac3442a1158f1f5a7a1295217deb087c05.scope - libcontainer container 726123b8fb74c763212d1f70a2e78eac3442a1158f1f5a7a1295217deb087c05. Mar 2 13:08:42.931753 systemd[1]: Started cri-containerd-d3d9bc3798f96da007e11713306c629a48a07d7ac64fff80ce518c88a295afe7.scope - libcontainer container d3d9bc3798f96da007e11713306c629a48a07d7ac64fff80ce518c88a295afe7. 
Mar 2 13:08:43.012544 containerd[1461]: time="2026-03-02T13:08:43.012468327Z" level=info msg="StartContainer for \"726123b8fb74c763212d1f70a2e78eac3442a1158f1f5a7a1295217deb087c05\" returns successfully" Mar 2 13:08:43.013217 containerd[1461]: time="2026-03-02T13:08:43.012591325Z" level=info msg="StartContainer for \"d3d9bc3798f96da007e11713306c629a48a07d7ac64fff80ce518c88a295afe7\" returns successfully" Mar 2 13:08:43.042662 containerd[1461]: time="2026-03-02T13:08:43.040833922Z" level=info msg="StartContainer for \"31c14321d830975b9e9e84fa2c57eadcef62ebd9702e80d02cab272de65923c7\" returns successfully" Mar 2 13:08:43.166654 kubelet[2155]: I0302 13:08:43.166430 2155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:43.167522 kubelet[2155]: E0302 13:08:43.167455 2155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Mar 2 13:08:44.112477 kubelet[2155]: E0302 13:08:44.112342 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:44.119105 kubelet[2155]: E0302 13:08:44.115731 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:44.158229 kubelet[2155]: E0302 13:08:44.157537 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:44.158229 kubelet[2155]: E0302 13:08:44.157891 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:44.162090 kubelet[2155]: E0302 13:08:44.161973 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:44.162237 kubelet[2155]: E0302 13:08:44.162198 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:45.297819 kubelet[2155]: E0302 13:08:45.297680 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:45.299418 kubelet[2155]: E0302 13:08:45.298212 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:45.299418 kubelet[2155]: E0302 13:08:45.298544 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:45.299418 kubelet[2155]: E0302 13:08:45.298716 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:45.299418 kubelet[2155]: E0302 13:08:45.299119 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:45.299418 kubelet[2155]: E0302 13:08:45.299204 2155 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:46.306299 kubelet[2155]: E0302 13:08:46.304209 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:46.306299 kubelet[2155]: E0302 13:08:46.305780 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:46.306299 kubelet[2155]: E0302 13:08:46.306178 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:46.306299 kubelet[2155]: E0302 13:08:46.306415 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:46.496777 kubelet[2155]: I0302 13:08:46.496658 2155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:47.313130 kubelet[2155]: E0302 13:08:47.313077 2155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:08:47.314110 kubelet[2155]: E0302 13:08:47.313328 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:47.383061 kubelet[2155]: E0302 13:08:47.383015 2155 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 13:08:47.561802 kubelet[2155]: I0302 13:08:47.555723 2155 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 13:08:47.561802 kubelet[2155]: I0302 13:08:47.561352 2155 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:47.615219 kubelet[2155]: E0302 13:08:47.614753 2155 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:47.615219 kubelet[2155]: I0302 13:08:47.614828 2155 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:47.625939 kubelet[2155]: E0302 13:08:47.625767 2155 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:47.625939 kubelet[2155]: I0302 13:08:47.625874 2155 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:47.634171 kubelet[2155]: E0302 13:08:47.634148 2155 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:47.840815 kubelet[2155]: I0302 13:08:47.839795 2155 apiserver.go:52] "Watching apiserver" Mar 2 13:08:47.962052 kubelet[2155]: I0302 13:08:47.960713 2155 desired_state_of_world_populator.go:154] "Finished populating initial desired 
state of world" Mar 2 13:08:49.852136 systemd[1]: Reloading requested from client PID 2443 ('systemctl') (unit session-7.scope)... Mar 2 13:08:49.852171 systemd[1]: Reloading... Mar 2 13:08:49.980555 zram_generator::config[2483]: No configuration found. Mar 2 13:08:50.356773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:08:50.455694 systemd[1]: Reloading finished in 603 ms. Mar 2 13:08:50.512136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:50.526062 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:08:50.526431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:50.526526 systemd[1]: kubelet.service: Consumed 6.185s CPU time, 130.1M memory peak, 0B memory swap peak. Mar 2 13:08:50.546005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:08:50.730994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:08:50.737001 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:08:50.810293 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:08:50.819271 kubelet[2528]: I0302 13:08:50.819196 2528 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 13:08:50.819271 kubelet[2528]: I0302 13:08:50.819254 2528 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:08:50.819271 kubelet[2528]: I0302 13:08:50.819273 2528 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:08:50.819271 kubelet[2528]: I0302 13:08:50.819279 2528 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:08:50.819548 kubelet[2528]: I0302 13:08:50.819508 2528 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 13:08:50.820773 kubelet[2528]: I0302 13:08:50.820733 2528 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:08:50.826247 kubelet[2528]: I0302 13:08:50.826209 2528 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:08:50.835686 kubelet[2528]: E0302 13:08:50.834576 2528 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:08:50.835686 kubelet[2528]: I0302 13:08:50.834698 2528 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:08:50.844351 kubelet[2528]: I0302 13:08:50.844333 2528 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:08:50.844768 kubelet[2528]: I0302 13:08:50.844724 2528 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:08:50.844908 kubelet[2528]: I0302 13:08:50.844759 2528 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:08:50.844908 kubelet[2528]: I0302 13:08:50.844905 2528 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 13:08:50.845185 kubelet[2528]: I0302 13:08:50.844914 2528 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 13:08:50.845185 kubelet[2528]: I0302 13:08:50.844974 2528 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:08:50.845235 kubelet[2528]: I0302 13:08:50.845203 2528 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 13:08:50.845544 kubelet[2528]: I0302 13:08:50.845494 2528 kubelet.go:482] "Attempting to sync node with API server" Mar 2 13:08:50.845544 kubelet[2528]: I0302 13:08:50.845538 2528 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:08:50.845636 kubelet[2528]: I0302 13:08:50.845563 2528 kubelet.go:394] "Adding apiserver pod source" Mar 2 13:08:50.845636 kubelet[2528]: I0302 13:08:50.845576 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:08:50.847901 kubelet[2528]: I0302 13:08:50.847817 2528 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:08:50.850651 kubelet[2528]: I0302 13:08:50.849095 2528 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:08:50.850651 kubelet[2528]: I0302 13:08:50.849144 2528 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:08:50.860683 kubelet[2528]: 
I0302 13:08:50.859500 2528 server.go:1257] "Started kubelet" Mar 2 13:08:50.860777 kubelet[2528]: I0302 13:08:50.860693 2528 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:08:50.862125 kubelet[2528]: I0302 13:08:50.861102 2528 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:08:50.862125 kubelet[2528]: I0302 13:08:50.861238 2528 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:08:50.862125 kubelet[2528]: I0302 13:08:50.861983 2528 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:08:50.863128 kubelet[2528]: I0302 13:08:50.863075 2528 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:08:50.871019 kubelet[2528]: I0302 13:08:50.870965 2528 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 13:08:50.872959 kubelet[2528]: I0302 13:08:50.872892 2528 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:08:50.873207 kubelet[2528]: I0302 13:08:50.873130 2528 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 13:08:50.873650 kubelet[2528]: I0302 13:08:50.873551 2528 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:08:50.873971 kubelet[2528]: I0302 13:08:50.873910 2528 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:08:50.876483 kubelet[2528]: I0302 13:08:50.876416 2528 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:08:50.876818 kubelet[2528]: I0302 13:08:50.876730 2528 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:08:50.879443 kubelet[2528]: I0302 13:08:50.879370 2528 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:08:50.900901 kubelet[2528]: I0302 13:08:50.900468 2528 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 13:08:50.903431 update_engine[1452]: I20260302 13:08:50.903333 1452 update_attempter.cc:509] Updating boot flags... Mar 2 13:08:50.912905 kubelet[2528]: I0302 13:08:50.912809 2528 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 2 13:08:50.912905 kubelet[2528]: I0302 13:08:50.912857 2528 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 13:08:50.912905 kubelet[2528]: I0302 13:08:50.912881 2528 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 13:08:50.913085 kubelet[2528]: E0302 13:08:50.912982 2528 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:08:50.949414 kubelet[2528]: I0302 13:08:50.949351 2528 cpu_manager.go:225] "Starting" policy="none" Mar 2 13:08:50.949414 kubelet[2528]: I0302 13:08:50.949369 2528 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 13:08:50.949414 kubelet[2528]: I0302 13:08:50.949389 2528 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:08:50.949722 kubelet[2528]: I0302 13:08:50.949580 2528 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 2 13:08:50.949722 kubelet[2528]: I0302 13:08:50.949595 2528 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 2 13:08:50.949722 kubelet[2528]: I0302 13:08:50.949659 2528 policy_none.go:50] "Start" Mar 2 13:08:50.949722 kubelet[2528]: I0302 13:08:50.949668 2528 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:08:50.949722 kubelet[2528]: I0302 13:08:50.949680 2528 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:08:50.950005 kubelet[2528]: I0302 13:08:50.949803 2528 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 13:08:50.950005 kubelet[2528]: I0302 13:08:50.949811 2528 policy_none.go:44] "Start" Mar 2 13:08:50.956542 kubelet[2528]: E0302 13:08:50.955842 2528 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:08:50.956542 kubelet[2528]: I0302 13:08:50.956171 2528 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:08:50.956542 kubelet[2528]: I0302 13:08:50.956186 2528 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:08:50.957684 kubelet[2528]: I0302 13:08:50.956762 2528 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:08:50.958375 kubelet[2528]: E0302 13:08:50.958334 2528 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:08:51.004694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2581) Mar 2 13:08:51.014100 kubelet[2528]: I0302 13:08:51.013864 2528 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:51.018349 kubelet[2528]: I0302 13:08:51.018057 2528 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:51.018432 kubelet[2528]: I0302 13:08:51.018351 2528 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:51.076532 kubelet[2528]: I0302 13:08:51.075830 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:51.076532 kubelet[2528]: I0302 13:08:51.076079 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a59ebe7391a4d2fe758e2728f6bdad9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a59ebe7391a4d2fe758e2728f6bdad9\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:51.076696 kubelet[2528]: I0302 13:08:51.076344 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:51.078000 kubelet[2528]: I0302 13:08:51.077259 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:51.078000 kubelet[2528]: I0302 13:08:51.077533 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:51.078000 kubelet[2528]: I0302 13:08:51.077566 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:51.078000 kubelet[2528]: I0302 13:08:51.077653 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a59ebe7391a4d2fe758e2728f6bdad9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a59ebe7391a4d2fe758e2728f6bdad9\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:51.078000 kubelet[2528]: I0302 13:08:51.077680 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a59ebe7391a4d2fe758e2728f6bdad9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a59ebe7391a4d2fe758e2728f6bdad9\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:51.080386 kubelet[2528]: I0302 13:08:51.077708 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:08:51.084055 kubelet[2528]: I0302 13:08:51.083409 2528 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:08:51.100362 kubelet[2528]: I0302 13:08:51.100231 2528 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 2 13:08:51.100362 kubelet[2528]: I0302 13:08:51.100352 2528 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 13:08:51.102026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2584) Mar 2 13:08:51.396459 kubelet[2528]: E0302 13:08:51.348696 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:51.440305 kubelet[2528]: E0302 13:08:51.438888 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:51.445035 kubelet[2528]: E0302 13:08:51.444752 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:51.481713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2584) Mar 2 13:08:51.849118 kubelet[2528]: I0302 13:08:51.847904 2528 apiserver.go:52] "Watching apiserver" Mar 2 13:08:51.875008 kubelet[2528]: I0302 13:08:51.874866 2528 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:08:52.028894 kubelet[2528]: E0302 13:08:52.028587 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:52.029365 kubelet[2528]: I0302 13:08:52.029044 2528 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:52.029450 kubelet[2528]: E0302 13:08:52.029429 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:52.039510 kubelet[2528]: E0302 13:08:52.038745 2528 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:52.039510 kubelet[2528]: E0302 13:08:52.038883 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:52.092888 kubelet[2528]: I0302 13:08:52.092836 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.0927911 podStartE2EDuration="1.0927911s" podCreationTimestamp="2026-03-02 13:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:52.081117748 +0000 UTC m=+1.338859107" watchObservedRunningTime="2026-03-02 13:08:52.0927911 +0000 UTC m=+1.350532457" Mar 2 13:08:52.115537 kubelet[2528]: I0302 13:08:52.113837 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.113818593 podStartE2EDuration="1.113818593s" podCreationTimestamp="2026-03-02 13:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:52.092965283 +0000 UTC m=+1.350706641" watchObservedRunningTime="2026-03-02 13:08:52.113818593 +0000 UTC m=+1.371559952" Mar 2 13:08:52.152765 kubelet[2528]: I0302 13:08:52.150103 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.150085844 podStartE2EDuration="1.150085844s" podCreationTimestamp="2026-03-02 13:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:52.1137889 +0000 UTC m=+1.371530298" watchObservedRunningTime="2026-03-02 13:08:52.150085844 +0000 UTC m=+1.407827212" Mar 2 13:08:53.031219 kubelet[2528]: E0302 13:08:53.031051 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:53.032275 kubelet[2528]: E0302 13:08:53.031877 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:54.033184 kubelet[2528]: E0302 13:08:54.033102 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:54.033184 kubelet[2528]: E0302 13:08:54.033123 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:54.563994 kubelet[2528]: I0302 13:08:54.563926 2528 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:08:54.564569 containerd[1461]: time="2026-03-02T13:08:54.564532690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 13:08:54.565717 kubelet[2528]: I0302 13:08:54.565157 2528 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:08:55.530973 systemd[1]: Created slice kubepods-besteffort-pod27bdcd8a_5310_4f05_b467_67eaaf02b471.slice - libcontainer container kubepods-besteffort-pod27bdcd8a_5310_4f05_b467_67eaaf02b471.slice. 
Mar 2 13:08:55.564129 kubelet[2528]: I0302 13:08:55.564058 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27bdcd8a-5310-4f05-b467-67eaaf02b471-xtables-lock\") pod \"kube-proxy-ds52s\" (UID: \"27bdcd8a-5310-4f05-b467-67eaaf02b471\") " pod="kube-system/kube-proxy-ds52s" Mar 2 13:08:55.564504 kubelet[2528]: I0302 13:08:55.564140 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27bdcd8a-5310-4f05-b467-67eaaf02b471-lib-modules\") pod \"kube-proxy-ds52s\" (UID: \"27bdcd8a-5310-4f05-b467-67eaaf02b471\") " pod="kube-system/kube-proxy-ds52s" Mar 2 13:08:55.564504 kubelet[2528]: I0302 13:08:55.564175 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27bdcd8a-5310-4f05-b467-67eaaf02b471-kube-proxy\") pod \"kube-proxy-ds52s\" (UID: \"27bdcd8a-5310-4f05-b467-67eaaf02b471\") " pod="kube-system/kube-proxy-ds52s" Mar 2 13:08:55.564504 kubelet[2528]: I0302 13:08:55.564204 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smt97\" (UniqueName: \"kubernetes.io/projected/27bdcd8a-5310-4f05-b467-67eaaf02b471-kube-api-access-smt97\") pod \"kube-proxy-ds52s\" (UID: \"27bdcd8a-5310-4f05-b467-67eaaf02b471\") " pod="kube-system/kube-proxy-ds52s" Mar 2 13:08:55.838362 systemd[1]: Created slice kubepods-besteffort-pode5909d62_97b7_42f7_ad09_5f3abbfa72bb.slice - libcontainer container kubepods-besteffort-pode5909d62_97b7_42f7_ad09_5f3abbfa72bb.slice. Mar 2 13:08:55.858532 kubelet[2528]: E0302 13:08:55.858445 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:55.859285 containerd[1461]: time="2026-03-02T13:08:55.859242860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ds52s,Uid:27bdcd8a-5310-4f05-b467-67eaaf02b471,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:55.866223 kubelet[2528]: I0302 13:08:55.866173 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5909d62-97b7-42f7-ad09-5f3abbfa72bb-var-lib-calico\") pod \"tigera-operator-6447996989-6smjn\" (UID: \"e5909d62-97b7-42f7-ad09-5f3abbfa72bb\") " pod="tigera-operator/tigera-operator-6447996989-6smjn" Mar 2 13:08:55.866223 kubelet[2528]: I0302 13:08:55.866216 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcrg\" (UniqueName: \"kubernetes.io/projected/e5909d62-97b7-42f7-ad09-5f3abbfa72bb-kube-api-access-cmcrg\") pod \"tigera-operator-6447996989-6smjn\" (UID: \"e5909d62-97b7-42f7-ad09-5f3abbfa72bb\") " pod="tigera-operator/tigera-operator-6447996989-6smjn" Mar 2 13:08:55.898484 containerd[1461]: time="2026-03-02T13:08:55.898153850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:55.898484 containerd[1461]: time="2026-03-02T13:08:55.898348943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:55.898665 containerd[1461]: time="2026-03-02T13:08:55.898561381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:55.898949 containerd[1461]: time="2026-03-02T13:08:55.898860165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:55.936856 systemd[1]: Started cri-containerd-152c102b5ea39f4bf83957af614219b6985f0cc8e06386ee309554df74581265.scope - libcontainer container 152c102b5ea39f4bf83957af614219b6985f0cc8e06386ee309554df74581265. Mar 2 13:08:55.964110 containerd[1461]: time="2026-03-02T13:08:55.964067927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ds52s,Uid:27bdcd8a-5310-4f05-b467-67eaaf02b471,Namespace:kube-system,Attempt:0,} returns sandbox id \"152c102b5ea39f4bf83957af614219b6985f0cc8e06386ee309554df74581265\"" Mar 2 13:08:55.964799 kubelet[2528]: E0302 13:08:55.964767 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:55.970645 containerd[1461]: time="2026-03-02T13:08:55.970565035Z" level=info msg="CreateContainer within sandbox \"152c102b5ea39f4bf83957af614219b6985f0cc8e06386ee309554df74581265\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:08:55.989477 containerd[1461]: time="2026-03-02T13:08:55.989420183Z" level=info msg="CreateContainer within sandbox \"152c102b5ea39f4bf83957af614219b6985f0cc8e06386ee309554df74581265\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05a22405266bbe6691c4b2101ebd9b889f1b9e4889acfe7312f0b1c8f2d21e8c\"" Mar 2 13:08:55.990137 containerd[1461]: time="2026-03-02T13:08:55.990049902Z" level=info msg="StartContainer for \"05a22405266bbe6691c4b2101ebd9b889f1b9e4889acfe7312f0b1c8f2d21e8c\"" Mar 2 13:08:56.030784 systemd[1]: Started cri-containerd-05a22405266bbe6691c4b2101ebd9b889f1b9e4889acfe7312f0b1c8f2d21e8c.scope - libcontainer container 05a22405266bbe6691c4b2101ebd9b889f1b9e4889acfe7312f0b1c8f2d21e8c. Mar 2 13:08:56.065410 containerd[1461]: time="2026-03-02T13:08:56.065326326Z" level=info msg="StartContainer for \"05a22405266bbe6691c4b2101ebd9b889f1b9e4889acfe7312f0b1c8f2d21e8c\" returns successfully" Mar 2 13:08:56.144483 containerd[1461]: time="2026-03-02T13:08:56.144361836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-6smjn,Uid:e5909d62-97b7-42f7-ad09-5f3abbfa72bb,Namespace:tigera-operator,Attempt:0,}" Mar 2 13:08:56.172462 containerd[1461]: time="2026-03-02T13:08:56.172302664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:56.172462 containerd[1461]: time="2026-03-02T13:08:56.172392521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:56.172462 containerd[1461]: time="2026-03-02T13:08:56.172405265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:56.172462 containerd[1461]: time="2026-03-02T13:08:56.172499090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:56.190780 systemd[1]: Started cri-containerd-f432ba1bd19792ab3602f2c36e19129ba3c353e5dcdd422afbea3d0e20d673ef.scope - libcontainer container f432ba1bd19792ab3602f2c36e19129ba3c353e5dcdd422afbea3d0e20d673ef. Mar 2 13:08:56.247450 containerd[1461]: time="2026-03-02T13:08:56.247359060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-6smjn,Uid:e5909d62-97b7-42f7-ad09-5f3abbfa72bb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f432ba1bd19792ab3602f2c36e19129ba3c353e5dcdd422afbea3d0e20d673ef\"" Mar 2 13:08:56.249664 containerd[1461]: time="2026-03-02T13:08:56.249548698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\"" Mar 2 13:08:56.974373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085405059.mount: Deactivated successfully. Mar 2 13:08:57.042443 kubelet[2528]: E0302 13:08:57.042375 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:57.054697 kubelet[2528]: I0302 13:08:57.054004 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ds52s" podStartSLOduration=2.053987225 podStartE2EDuration="2.053987225s" podCreationTimestamp="2026-03-02 13:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:57.053507039 +0000 UTC m=+6.311248397" watchObservedRunningTime="2026-03-02 13:08:57.053987225 +0000 UTC m=+6.311728583" Mar 2 13:08:58.416789 kubelet[2528]: E0302 13:08:58.415099 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:58.434939 kubelet[2528]: E0302 13:08:58.434863 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:59.130771 kubelet[2528]: E0302 13:08:59.130368 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:00.383911 containerd[1461]: time="2026-03-02T13:09:00.383663545Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:00.384761 containerd[1461]: time="2026-03-02T13:09:00.384418414Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719" Mar 2 13:09:00.385688 containerd[1461]: time="2026-03-02T13:09:00.385585585Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:00.388308 containerd[1461]: time="2026-03-02T13:09:00.388252950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:00.389445 containerd[1461]: time="2026-03-02T13:09:00.389406034Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag 
\"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 4.139815157s" Mar 2 13:09:00.389577 containerd[1461]: time="2026-03-02T13:09:00.389536897Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\"" Mar 2 13:09:00.396548 containerd[1461]: time="2026-03-02T13:09:00.396515925Z" level=info msg="CreateContainer within sandbox \"f432ba1bd19792ab3602f2c36e19129ba3c353e5dcdd422afbea3d0e20d673ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 2 13:09:00.414999 containerd[1461]: time="2026-03-02T13:09:00.414572121Z" level=info msg="CreateContainer within sandbox \"f432ba1bd19792ab3602f2c36e19129ba3c353e5dcdd422afbea3d0e20d673ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cbc2f775ff09d5ba0d1250116dae5ff8552e6ecdd27c53cb7266bc99f48a4887\"" Mar 2 13:09:00.418270 containerd[1461]: time="2026-03-02T13:09:00.418173859Z" level=info msg="StartContainer for \"cbc2f775ff09d5ba0d1250116dae5ff8552e6ecdd27c53cb7266bc99f48a4887\"" Mar 2 13:09:00.470776 systemd[1]: Started cri-containerd-cbc2f775ff09d5ba0d1250116dae5ff8552e6ecdd27c53cb7266bc99f48a4887.scope - libcontainer container cbc2f775ff09d5ba0d1250116dae5ff8552e6ecdd27c53cb7266bc99f48a4887. Mar 2 13:09:00.520545 containerd[1461]: time="2026-03-02T13:09:00.520436509Z" level=info msg="StartContainer for \"cbc2f775ff09d5ba0d1250116dae5ff8552e6ecdd27c53cb7266bc99f48a4887\" returns successfully" Mar 2 13:09:03.526334 kubelet[2528]: E0302 13:09:03.526296 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:03.578852 kubelet[2528]: I0302 13:09:03.578185 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6447996989-6smjn" podStartSLOduration=4.43543803 podStartE2EDuration="8.578164651s" podCreationTimestamp="2026-03-02 13:08:55 +0000 UTC" firstStartedPulling="2026-03-02 13:08:56.249049714 +0000 UTC m=+5.506791073" lastFinishedPulling="2026-03-02 13:09:00.391776325 +0000 UTC m=+9.649517694" observedRunningTime="2026-03-02 13:09:01.491671518 +0000 UTC m=+10.749412877" watchObservedRunningTime="2026-03-02 13:09:03.578164651 +0000 UTC m=+12.835906010" Mar 2 13:09:06.943102 sudo[1638]: pam_unix(sudo:session): session closed for user root Mar 2 13:09:06.946838 sshd[1634]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:06.953766 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:49576.service: Deactivated successfully. Mar 2 13:09:06.960271 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 13:09:06.960487 systemd[1]: session-7.scope: Consumed 8.039s CPU time, 158.5M memory peak, 0B memory swap peak. Mar 2 13:09:06.963184 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Mar 2 13:09:06.969946 systemd-logind[1450]: Removed session 7. 
Mar 2 13:09:07.995286 kubelet[2528]: E0302 13:09:07.995207 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:09.132574 kubelet[2528]: E0302 13:09:09.132523 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:09.376965 systemd[1]: Created slice kubepods-besteffort-podefe7e156_172f_408f_8619_ad53acbb95d1.slice - libcontainer container kubepods-besteffort-podefe7e156_172f_408f_8619_ad53acbb95d1.slice. Mar 2 13:09:09.391523 systemd[1]: Created slice kubepods-besteffort-pod56b738a4_5aaf_43be_80f7_22815f249c98.slice - libcontainer container kubepods-besteffort-pod56b738a4_5aaf_43be_80f7_22815f249c98.slice. Mar 2 13:09:09.455108 kubelet[2528]: I0302 13:09:09.454979 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-bpffs\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.455108 kubelet[2528]: I0302 13:09:09.455067 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-cni-bin-dir\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.455108 kubelet[2528]: I0302 13:09:09.455095 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-policysync\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.455403 kubelet[2528]: I0302 13:09:09.455246 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-sys-fs\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.455403 kubelet[2528]: I0302 13:09:09.455351 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-var-lib-calico\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.455478 kubelet[2528]: I0302 13:09:09.455411 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efe7e156-172f-408f-8619-ad53acbb95d1-tigera-ca-bundle\") pod \"calico-typha-759db96b68-4dkgf\" (UID: \"efe7e156-172f-408f-8619-ad53acbb95d1\") " pod="calico-system/calico-typha-759db96b68-4dkgf" Mar 2 13:09:09.455921 kubelet[2528]: I0302 13:09:09.455766 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqmdz\" (UniqueName: \"kubernetes.io/projected/efe7e156-172f-408f-8619-ad53acbb95d1-kube-api-access-tqmdz\") pod \"calico-typha-759db96b68-4dkgf\" (UID: \"efe7e156-172f-408f-8619-ad53acbb95d1\") " 
pod="calico-system/calico-typha-759db96b68-4dkgf" Mar 2 13:09:09.457176 kubelet[2528]: I0302 13:09:09.457043 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56b738a4-5aaf-43be-80f7-22815f249c98-node-certs\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457176 kubelet[2528]: I0302 13:09:09.457071 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-xtables-lock\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457176 kubelet[2528]: I0302 13:09:09.457126 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/efe7e156-172f-408f-8619-ad53acbb95d1-typha-certs\") pod \"calico-typha-759db96b68-4dkgf\" (UID: \"efe7e156-172f-408f-8619-ad53acbb95d1\") " pod="calico-system/calico-typha-759db96b68-4dkgf" Mar 2 13:09:09.457484 kubelet[2528]: I0302 13:09:09.457202 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-flexvol-driver-host\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457484 kubelet[2528]: I0302 13:09:09.457228 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b738a4-5aaf-43be-80f7-22815f249c98-tigera-ca-bundle\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457484 kubelet[2528]: I0302 13:09:09.457262 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-lib-modules\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457484 kubelet[2528]: I0302 13:09:09.457286 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-nodeproc\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457484 kubelet[2528]: I0302 13:09:09.457308 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-cni-log-dir\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457662 kubelet[2528]: I0302 13:09:09.457330 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-cni-net-dir\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457662 kubelet[2528]: I0302 
13:09:09.457342 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56b738a4-5aaf-43be-80f7-22815f249c98-var-run-calico\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.457662 kubelet[2528]: I0302 13:09:09.457356 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswl2\" (UniqueName: \"kubernetes.io/projected/56b738a4-5aaf-43be-80f7-22815f249c98-kube-api-access-wswl2\") pod \"calico-node-gdvv9\" (UID: \"56b738a4-5aaf-43be-80f7-22815f249c98\") " pod="calico-system/calico-node-gdvv9" Mar 2 13:09:09.470492 kubelet[2528]: E0302 13:09:09.470006 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:09.558728 kubelet[2528]: I0302 13:09:09.558567 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c237a7a5-5d24-43e5-a73e-bee8dafc330d-kubelet-dir\") pod \"csi-node-driver-dpl5p\" (UID: \"c237a7a5-5d24-43e5-a73e-bee8dafc330d\") " pod="calico-system/csi-node-driver-dpl5p" Mar 2 13:09:09.558728 kubelet[2528]: I0302 13:09:09.558693 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c237a7a5-5d24-43e5-a73e-bee8dafc330d-varrun\") pod \"csi-node-driver-dpl5p\" (UID: \"c237a7a5-5d24-43e5-a73e-bee8dafc330d\") " pod="calico-system/csi-node-driver-dpl5p" Mar 2 13:09:09.559329 kubelet[2528]: I0302 13:09:09.559163 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c237a7a5-5d24-43e5-a73e-bee8dafc330d-socket-dir\") pod \"csi-node-driver-dpl5p\" (UID: \"c237a7a5-5d24-43e5-a73e-bee8dafc330d\") " pod="calico-system/csi-node-driver-dpl5p" Mar 2 13:09:09.559809 kubelet[2528]: I0302 13:09:09.559774 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c237a7a5-5d24-43e5-a73e-bee8dafc330d-registration-dir\") pod \"csi-node-driver-dpl5p\" (UID: \"c237a7a5-5d24-43e5-a73e-bee8dafc330d\") " pod="calico-system/csi-node-driver-dpl5p" Mar 2 13:09:09.559912 kubelet[2528]: I0302 13:09:09.559844 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9sfr\" (UniqueName: \"kubernetes.io/projected/c237a7a5-5d24-43e5-a73e-bee8dafc330d-kube-api-access-j9sfr\") pod \"csi-node-driver-dpl5p\" (UID: \"c237a7a5-5d24-43e5-a73e-bee8dafc330d\") " pod="calico-system/csi-node-driver-dpl5p" Mar 2 13:09:09.562070 kubelet[2528]: E0302 13:09:09.561953 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.562070 kubelet[2528]: W0302 13:09:09.562049 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.562304 
kubelet[2528]: E0302 13:09:09.562190 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.562681 kubelet[2528]: E0302 13:09:09.562565 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.562681 kubelet[2528]: W0302 13:09:09.562577 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.562681 kubelet[2528]: E0302 13:09:09.562588 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.563075 kubelet[2528]: E0302 13:09:09.562981 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.563075 kubelet[2528]: W0302 13:09:09.563016 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.563075 kubelet[2528]: E0302 13:09:09.563032 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.563474 kubelet[2528]: E0302 13:09:09.563374 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.563474 kubelet[2528]: W0302 13:09:09.563401 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.563474 kubelet[2528]: E0302 13:09:09.563414 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.563940 kubelet[2528]: E0302 13:09:09.563915 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.563940 kubelet[2528]: W0302 13:09:09.563931 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.564100 kubelet[2528]: E0302 13:09:09.563944 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.564584 kubelet[2528]: E0302 13:09:09.564488 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.564584 kubelet[2528]: W0302 13:09:09.564502 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.564584 kubelet[2528]: E0302 13:09:09.564516 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.565516 kubelet[2528]: E0302 13:09:09.565468 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.565516 kubelet[2528]: W0302 13:09:09.565504 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.565516 kubelet[2528]: E0302 13:09:09.565516 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.566223 kubelet[2528]: E0302 13:09:09.566184 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.566223 kubelet[2528]: W0302 13:09:09.566216 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.566223 kubelet[2528]: E0302 13:09:09.566233 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.566906 kubelet[2528]: E0302 13:09:09.566887 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.567105 kubelet[2528]: W0302 13:09:09.566991 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.567195 kubelet[2528]: E0302 13:09:09.567181 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.568744 kubelet[2528]: E0302 13:09:09.568588 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.568744 kubelet[2528]: W0302 13:09:09.568699 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.568744 kubelet[2528]: E0302 13:09:09.568720 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.569808 kubelet[2528]: E0302 13:09:09.569724 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.569808 kubelet[2528]: W0302 13:09:09.569769 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.569808 kubelet[2528]: E0302 13:09:09.569785 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.571012 kubelet[2528]: E0302 13:09:09.570935 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.571012 kubelet[2528]: W0302 13:09:09.570973 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.571012 kubelet[2528]: E0302 13:09:09.570990 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.572064 kubelet[2528]: E0302 13:09:09.572010 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.572064 kubelet[2528]: W0302 13:09:09.572054 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.572128 kubelet[2528]: E0302 13:09:09.572069 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.572449 kubelet[2528]: E0302 13:09:09.572410 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.572449 kubelet[2528]: W0302 13:09:09.572441 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.572588 kubelet[2528]: E0302 13:09:09.572454 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.574074 kubelet[2528]: E0302 13:09:09.574038 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.574195 kubelet[2528]: W0302 13:09:09.574077 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.574195 kubelet[2528]: E0302 13:09:09.574091 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.584438 kubelet[2528]: E0302 13:09:09.584352 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.584438 kubelet[2528]: W0302 13:09:09.584385 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.584438 kubelet[2528]: E0302 13:09:09.584404 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.586008 kubelet[2528]: E0302 13:09:09.585968 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.586078 kubelet[2528]: W0302 13:09:09.586009 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.586078 kubelet[2528]: E0302 13:09:09.586036 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.586389 kubelet[2528]: E0302 13:09:09.586356 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.586431 kubelet[2528]: W0302 13:09:09.586390 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.586431 kubelet[2528]: E0302 13:09:09.586407 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.587254 kubelet[2528]: E0302 13:09:09.587220 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.587254 kubelet[2528]: W0302 13:09:09.587246 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.587372 kubelet[2528]: E0302 13:09:09.587258 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.587746 kubelet[2528]: E0302 13:09:09.587720 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.587746 kubelet[2528]: W0302 13:09:09.587742 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.587828 kubelet[2528]: E0302 13:09:09.587752 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.588172 kubelet[2528]: E0302 13:09:09.588145 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.588172 kubelet[2528]: W0302 13:09:09.588165 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.588172 kubelet[2528]: E0302 13:09:09.588175 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.588494 kubelet[2528]: E0302 13:09:09.588476 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.588494 kubelet[2528]: W0302 13:09:09.588487 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.588494 kubelet[2528]: E0302 13:09:09.588496 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.593003 kubelet[2528]: E0302 13:09:09.592964 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.593003 kubelet[2528]: W0302 13:09:09.592999 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.593110 kubelet[2528]: E0302 13:09:09.593020 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.593849 kubelet[2528]: E0302 13:09:09.593820 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.593849 kubelet[2528]: W0302 13:09:09.593838 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.594113 kubelet[2528]: E0302 13:09:09.593885 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.661498 kubelet[2528]: E0302 13:09:09.661368 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.661498 kubelet[2528]: W0302 13:09:09.661398 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.661498 kubelet[2528]: E0302 13:09:09.661418 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.661837 kubelet[2528]: E0302 13:09:09.661813 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.661837 kubelet[2528]: W0302 13:09:09.661834 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.661936 kubelet[2528]: E0302 13:09:09.661848 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.662224 kubelet[2528]: E0302 13:09:09.662202 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.662224 kubelet[2528]: W0302 13:09:09.662222 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.662327 kubelet[2528]: E0302 13:09:09.662232 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.662800 kubelet[2528]: E0302 13:09:09.662727 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.662800 kubelet[2528]: W0302 13:09:09.662764 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.662800 kubelet[2528]: E0302 13:09:09.662785 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.663344 kubelet[2528]: E0302 13:09:09.663320 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.663417 kubelet[2528]: W0302 13:09:09.663387 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.663417 kubelet[2528]: E0302 13:09:09.663405 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.663808 kubelet[2528]: E0302 13:09:09.663780 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.663905 kubelet[2528]: W0302 13:09:09.663810 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.663905 kubelet[2528]: E0302 13:09:09.663825 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.664206 kubelet[2528]: E0302 13:09:09.664156 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.664206 kubelet[2528]: W0302 13:09:09.664180 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.664206 kubelet[2528]: E0302 13:09:09.664190 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.664513 kubelet[2528]: E0302 13:09:09.664461 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.664513 kubelet[2528]: W0302 13:09:09.664472 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.664513 kubelet[2528]: E0302 13:09:09.664483 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.664926 kubelet[2528]: E0302 13:09:09.664897 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.664968 kubelet[2528]: W0302 13:09:09.664928 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.664968 kubelet[2528]: E0302 13:09:09.664943 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.665275 kubelet[2528]: E0302 13:09:09.665256 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.665378 kubelet[2528]: W0302 13:09:09.665276 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.665378 kubelet[2528]: E0302 13:09:09.665290 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.665797 kubelet[2528]: E0302 13:09:09.665733 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.665797 kubelet[2528]: W0302 13:09:09.665773 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.665797 kubelet[2528]: E0302 13:09:09.665794 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.666281 kubelet[2528]: E0302 13:09:09.666223 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.666281 kubelet[2528]: W0302 13:09:09.666261 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.666693 kubelet[2528]: E0302 13:09:09.666291 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.666932 kubelet[2528]: E0302 13:09:09.666845 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.666932 kubelet[2528]: W0302 13:09:09.666899 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.666932 kubelet[2528]: E0302 13:09:09.666911 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.667265 kubelet[2528]: E0302 13:09:09.667216 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.667265 kubelet[2528]: W0302 13:09:09.667241 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.667265 kubelet[2528]: E0302 13:09:09.667250 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.667548 kubelet[2528]: E0302 13:09:09.667518 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.667548 kubelet[2528]: W0302 13:09:09.667537 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.667548 kubelet[2528]: E0302 13:09:09.667545 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.667919 kubelet[2528]: E0302 13:09:09.667885 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.667919 kubelet[2528]: W0302 13:09:09.667905 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.667919 kubelet[2528]: E0302 13:09:09.667915 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.668196 kubelet[2528]: E0302 13:09:09.668164 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.668196 kubelet[2528]: W0302 13:09:09.668182 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.668196 kubelet[2528]: E0302 13:09:09.668191 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.668453 kubelet[2528]: E0302 13:09:09.668422 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.668453 kubelet[2528]: W0302 13:09:09.668441 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.668453 kubelet[2528]: E0302 13:09:09.668449 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.668777 kubelet[2528]: E0302 13:09:09.668744 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.668777 kubelet[2528]: W0302 13:09:09.668764 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.668777 kubelet[2528]: E0302 13:09:09.668772 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.669080 kubelet[2528]: E0302 13:09:09.669049 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.669080 kubelet[2528]: W0302 13:09:09.669067 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.669080 kubelet[2528]: E0302 13:09:09.669075 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.669369 kubelet[2528]: E0302 13:09:09.669341 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.669369 kubelet[2528]: W0302 13:09:09.669360 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.669369 kubelet[2528]: E0302 13:09:09.669368 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.669700 kubelet[2528]: E0302 13:09:09.669670 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.669700 kubelet[2528]: W0302 13:09:09.669690 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.669700 kubelet[2528]: E0302 13:09:09.669699 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.670075 kubelet[2528]: E0302 13:09:09.670050 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.670075 kubelet[2528]: W0302 13:09:09.670070 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.670127 kubelet[2528]: E0302 13:09:09.670079 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.670782 kubelet[2528]: E0302 13:09:09.670719 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.670782 kubelet[2528]: W0302 13:09:09.670756 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.670782 kubelet[2528]: E0302 13:09:09.670772 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.671279 kubelet[2528]: E0302 13:09:09.671216 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.671279 kubelet[2528]: W0302 13:09:09.671251 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.671279 kubelet[2528]: E0302 13:09:09.671266 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:09.679136 kubelet[2528]: E0302 13:09:09.679122 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:09.679136 kubelet[2528]: W0302 13:09:09.679134 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:09.679216 kubelet[2528]: E0302 13:09:09.679146 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:09.684363 kubelet[2528]: E0302 13:09:09.684304 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:09.685255 containerd[1461]: time="2026-03-02T13:09:09.685122196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-759db96b68-4dkgf,Uid:efe7e156-172f-408f-8619-ad53acbb95d1,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:09.702361 containerd[1461]: time="2026-03-02T13:09:09.702275059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gdvv9,Uid:56b738a4-5aaf-43be-80f7-22815f249c98,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:09.726708 containerd[1461]: time="2026-03-02T13:09:09.725142628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:09.726708 containerd[1461]: time="2026-03-02T13:09:09.725194235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:09.726708 containerd[1461]: time="2026-03-02T13:09:09.725207860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:09.726708 containerd[1461]: time="2026-03-02T13:09:09.725917353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:09.753880 systemd[1]: Started cri-containerd-91505800de0ccdb63606fb6f7849a77c023af2bdf150604a744999eb4b3a5574.scope - libcontainer container 91505800de0ccdb63606fb6f7849a77c023af2bdf150604a744999eb4b3a5574. Mar 2 13:09:09.755666 containerd[1461]: time="2026-03-02T13:09:09.755057003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:09.755666 containerd[1461]: time="2026-03-02T13:09:09.755168841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:09.755666 containerd[1461]: time="2026-03-02T13:09:09.755317309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:09.755666 containerd[1461]: time="2026-03-02T13:09:09.755481154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:09.788348 systemd[1]: Started cri-containerd-f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff.scope - libcontainer container f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff. 
Mar 2 13:09:09.814157 containerd[1461]: time="2026-03-02T13:09:09.814043203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-759db96b68-4dkgf,Uid:efe7e156-172f-408f-8619-ad53acbb95d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"91505800de0ccdb63606fb6f7849a77c023af2bdf150604a744999eb4b3a5574\"" Mar 2 13:09:09.816154 kubelet[2528]: E0302 13:09:09.815944 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:09.818207 containerd[1461]: time="2026-03-02T13:09:09.818105387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\"" Mar 2 13:09:09.823736 containerd[1461]: time="2026-03-02T13:09:09.823647985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gdvv9,Uid:56b738a4-5aaf-43be-80f7-22815f249c98,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\"" Mar 2 13:09:10.930284 kubelet[2528]: E0302 13:09:10.930052 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:11.181203 containerd[1461]: time="2026-03-02T13:09:11.181046119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:11.182123 containerd[1461]: time="2026-03-02T13:09:11.182079428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=36094696" Mar 2 13:09:11.183685 containerd[1461]: time="2026-03-02T13:09:11.183635471Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:11.186383 containerd[1461]: time="2026-03-02T13:09:11.186344608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:11.187487 containerd[1461]: time="2026-03-02T13:09:11.187380898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 1.369238481s" Mar 2 13:09:11.187487 containerd[1461]: time="2026-03-02T13:09:11.187471968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\"" Mar 2 13:09:11.188754 containerd[1461]: time="2026-03-02T13:09:11.188690530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\"" Mar 2 13:09:11.203365 containerd[1461]: time="2026-03-02T13:09:11.203318202Z" level=info msg="CreateContainer within sandbox \"91505800de0ccdb63606fb6f7849a77c023af2bdf150604a744999eb4b3a5574\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 2 13:09:11.226122 containerd[1461]: time="2026-03-02T13:09:11.225926884Z" 
level=info msg="CreateContainer within sandbox \"91505800de0ccdb63606fb6f7849a77c023af2bdf150604a744999eb4b3a5574\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9a3f5e80561a170a7fa7871858de5a313439f6ff5963d952380ae83a576f017f\"" Mar 2 13:09:11.228548 containerd[1461]: time="2026-03-02T13:09:11.227265669Z" level=info msg="StartContainer for \"9a3f5e80561a170a7fa7871858de5a313439f6ff5963d952380ae83a576f017f\"" Mar 2 13:09:11.288789 systemd[1]: Started cri-containerd-9a3f5e80561a170a7fa7871858de5a313439f6ff5963d952380ae83a576f017f.scope - libcontainer container 9a3f5e80561a170a7fa7871858de5a313439f6ff5963d952380ae83a576f017f. Mar 2 13:09:11.347470 containerd[1461]: time="2026-03-02T13:09:11.347365775Z" level=info msg="StartContainer for \"9a3f5e80561a170a7fa7871858de5a313439f6ff5963d952380ae83a576f017f\" returns successfully" Mar 2 13:09:11.540124 kubelet[2528]: E0302 13:09:11.540061 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:11.569967 kubelet[2528]: E0302 13:09:11.569470 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.569967 kubelet[2528]: W0302 13:09:11.569493 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.569967 kubelet[2528]: E0302 13:09:11.569515 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.574471 kubelet[2528]: E0302 13:09:11.574378 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.574471 kubelet[2528]: W0302 13:09:11.574397 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.574471 kubelet[2528]: E0302 13:09:11.574416 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.578309 kubelet[2528]: E0302 13:09:11.577945 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.578309 kubelet[2528]: W0302 13:09:11.577967 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.578309 kubelet[2528]: E0302 13:09:11.577990 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.579798 kubelet[2528]: E0302 13:09:11.579500 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.579798 kubelet[2528]: W0302 13:09:11.579514 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.579798 kubelet[2528]: E0302 13:09:11.579531 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.587839 kubelet[2528]: E0302 13:09:11.587683 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.598821 kubelet[2528]: W0302 13:09:11.595273 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.598821 kubelet[2528]: E0302 13:09:11.596691 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.610674 kubelet[2528]: E0302 13:09:11.607461 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.610674 kubelet[2528]: W0302 13:09:11.607482 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.610674 kubelet[2528]: E0302 13:09:11.607784 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.610674 kubelet[2528]: E0302 13:09:11.608362 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.610674 kubelet[2528]: W0302 13:09:11.608373 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.610674 kubelet[2528]: E0302 13:09:11.608389 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.610674 kubelet[2528]: E0302 13:09:11.608760 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.610674 kubelet[2528]: W0302 13:09:11.608774 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.610674 kubelet[2528]: E0302 13:09:11.608789 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.611088 kubelet[2528]: E0302 13:09:11.610951 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.611088 kubelet[2528]: W0302 13:09:11.610970 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.611088 kubelet[2528]: E0302 13:09:11.610984 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.611438 kubelet[2528]: E0302 13:09:11.611380 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.611438 kubelet[2528]: W0302 13:09:11.611431 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.611508 kubelet[2528]: E0302 13:09:11.611457 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.611883 kubelet[2528]: E0302 13:09:11.611823 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.611883 kubelet[2528]: W0302 13:09:11.611882 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.611962 kubelet[2528]: E0302 13:09:11.611898 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.613395 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.614792 kubelet[2528]: W0302 13:09:11.613412 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.613425 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.613889 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.614792 kubelet[2528]: W0302 13:09:11.613899 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.613911 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.614119 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.614792 kubelet[2528]: W0302 13:09:11.614126 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.614134 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.614792 kubelet[2528]: E0302 13:09:11.614362 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.615064 kubelet[2528]: W0302 13:09:11.614372 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.615064 kubelet[2528]: E0302 13:09:11.614381 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.615156 kubelet[2528]: E0302 13:09:11.615130 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.615156 kubelet[2528]: W0302 13:09:11.615145 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.615156 kubelet[2528]: E0302 13:09:11.615155 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.615746 kubelet[2528]: E0302 13:09:11.615565 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.615746 kubelet[2528]: W0302 13:09:11.615638 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.615746 kubelet[2528]: E0302 13:09:11.615650 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.616330 kubelet[2528]: E0302 13:09:11.616268 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.616330 kubelet[2528]: W0302 13:09:11.616305 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.616330 kubelet[2528]: E0302 13:09:11.616316 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.619282 kubelet[2528]: E0302 13:09:11.617594 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.619282 kubelet[2528]: W0302 13:09:11.617654 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.619282 kubelet[2528]: E0302 13:09:11.617665 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.619282 kubelet[2528]: E0302 13:09:11.618014 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.619282 kubelet[2528]: W0302 13:09:11.618023 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.619282 kubelet[2528]: E0302 13:09:11.618033 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.629760 kubelet[2528]: E0302 13:09:11.627995 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.629760 kubelet[2528]: W0302 13:09:11.628036 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.629760 kubelet[2528]: E0302 13:09:11.628087 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.637949 kubelet[2528]: E0302 13:09:11.636312 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.637949 kubelet[2528]: W0302 13:09:11.636335 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.637949 kubelet[2528]: E0302 13:09:11.636355 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.637949 kubelet[2528]: E0302 13:09:11.636788 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.637949 kubelet[2528]: W0302 13:09:11.636799 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.637949 kubelet[2528]: E0302 13:09:11.636815 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.637949 kubelet[2528]: E0302 13:09:11.637675 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.637949 kubelet[2528]: W0302 13:09:11.637686 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.637949 kubelet[2528]: E0302 13:09:11.637696 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.638194 kubelet[2528]: E0302 13:09:11.638149 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.638194 kubelet[2528]: W0302 13:09:11.638159 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.638194 kubelet[2528]: E0302 13:09:11.638169 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.642587 kubelet[2528]: E0302 13:09:11.642402 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.642587 kubelet[2528]: W0302 13:09:11.642415 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.642587 kubelet[2528]: E0302 13:09:11.642429 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.645042 kubelet[2528]: E0302 13:09:11.644997 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.645042 kubelet[2528]: W0302 13:09:11.645026 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.645042 kubelet[2528]: E0302 13:09:11.645037 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.646125 kubelet[2528]: E0302 13:09:11.645433 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.646125 kubelet[2528]: W0302 13:09:11.645447 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.646125 kubelet[2528]: E0302 13:09:11.645457 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.646125 kubelet[2528]: E0302 13:09:11.646100 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.646125 kubelet[2528]: W0302 13:09:11.646109 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.646125 kubelet[2528]: E0302 13:09:11.646118 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.649112 kubelet[2528]: E0302 13:09:11.649071 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.649112 kubelet[2528]: W0302 13:09:11.649100 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.649112 kubelet[2528]: E0302 13:09:11.649111 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.651424 kubelet[2528]: E0302 13:09:11.649592 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.651424 kubelet[2528]: W0302 13:09:11.649646 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.651424 kubelet[2528]: E0302 13:09:11.649657 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.652000 kubelet[2528]: E0302 13:09:11.651957 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.652000 kubelet[2528]: W0302 13:09:11.651991 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.652000 kubelet[2528]: E0302 13:09:11.652003 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:09:11.652453 kubelet[2528]: E0302 13:09:11.652412 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:09:11.652453 kubelet[2528]: W0302 13:09:11.652443 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:09:11.652453 kubelet[2528]: E0302 13:09:11.652453 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:09:11.966242 containerd[1461]: time="2026-03-02T13:09:11.966101427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:11.967079 containerd[1461]: time="2026-03-02T13:09:11.967031176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4630152" Mar 2 13:09:11.968687 containerd[1461]: time="2026-03-02T13:09:11.968573387Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:11.971382 containerd[1461]: time="2026-03-02T13:09:11.971271595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:11.972275 containerd[1461]: time="2026-03-02T13:09:11.972224072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 783.4848ms" Mar 2 13:09:11.972275 containerd[1461]: time="2026-03-02T13:09:11.972271430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\"" Mar 2 13:09:11.977292 containerd[1461]: time="2026-03-02T13:09:11.977242241Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 13:09:11.993351 containerd[1461]: time="2026-03-02T13:09:11.993289589Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781\"" Mar 2 13:09:11.993950 containerd[1461]: time="2026-03-02T13:09:11.993900348Z" level=info msg="StartContainer for \"13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781\"" Mar 2 13:09:12.057007 systemd[1]: Started cri-containerd-13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781.scope - libcontainer container 13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781. Mar 2 13:09:12.095965 containerd[1461]: time="2026-03-02T13:09:12.095895576Z" level=info msg="StartContainer for \"13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781\" returns successfully" Mar 2 13:09:12.110741 systemd[1]: cri-containerd-13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781.scope: Deactivated successfully. 
Mar 2 13:09:12.252321 containerd[1461]: time="2026-03-02T13:09:12.252216014Z" level=info msg="shim disconnected" id=13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781 namespace=k8s.io Mar 2 13:09:12.252321 containerd[1461]: time="2026-03-02T13:09:12.252323765Z" level=warning msg="cleaning up after shim disconnected" id=13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781 namespace=k8s.io Mar 2 13:09:12.252844 containerd[1461]: time="2026-03-02T13:09:12.252333333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:09:12.545331 kubelet[2528]: I0302 13:09:12.544943 2528 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:09:12.546367 kubelet[2528]: E0302 13:09:12.546307 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:12.549820 containerd[1461]: time="2026-03-02T13:09:12.549684781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 13:09:12.565291 kubelet[2528]: I0302 13:09:12.565054 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-759db96b68-4dkgf" podStartSLOduration=2.19432643 podStartE2EDuration="3.565042673s" podCreationTimestamp="2026-03-02 13:09:09 +0000 UTC" firstStartedPulling="2026-03-02 13:09:09.817748432 +0000 UTC m=+19.075489789" lastFinishedPulling="2026-03-02 13:09:11.188464674 +0000 UTC m=+20.446206032" observedRunningTime="2026-03-02 13:09:11.633986004 +0000 UTC m=+20.891727393" watchObservedRunningTime="2026-03-02 13:09:12.565042673 +0000 UTC m=+21.822784031" Mar 2 13:09:12.574423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a2abc753742965cac0446538dc815f5c469190ba8bd9f55b60e7256f85d781-rootfs.mount: Deactivated successfully. Mar 2 13:09:12.913977 kubelet[2528]: E0302 13:09:12.913823 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:14.913389 kubelet[2528]: E0302 13:09:14.913340 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:16.918943 kubelet[2528]: E0302 13:09:16.915789 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:17.246715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105957648.mount: Deactivated successfully. 
Mar 2 13:09:17.457987 containerd[1461]: time="2026-03-02T13:09:17.457911881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:17.458752 containerd[1461]: time="2026-03-02T13:09:17.458697740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365" Mar 2 13:09:17.460308 containerd[1461]: time="2026-03-02T13:09:17.460220844Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:17.462640 containerd[1461]: time="2026-03-02T13:09:17.462571925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:17.463474 containerd[1461]: time="2026-03-02T13:09:17.463397067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 4.913653416s" Mar 2 13:09:17.463474 containerd[1461]: time="2026-03-02T13:09:17.463443594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\"" Mar 2 13:09:17.479842 containerd[1461]: time="2026-03-02T13:09:17.479767383Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 13:09:17.533205 containerd[1461]: time="2026-03-02T13:09:17.533099928Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0\"" Mar 2 13:09:17.535550 containerd[1461]: time="2026-03-02T13:09:17.533971059Z" level=info msg="StartContainer for \"7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0\"" Mar 2 13:09:17.596819 systemd[1]: Started cri-containerd-7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0.scope - libcontainer container 7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0. Mar 2 13:09:17.649579 containerd[1461]: time="2026-03-02T13:09:17.649521219Z" level=info msg="StartContainer for \"7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0\" returns successfully" Mar 2 13:09:17.736531 systemd[1]: cri-containerd-7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0.scope: Deactivated successfully. 
Mar 2 13:09:17.972763 containerd[1461]: time="2026-03-02T13:09:17.969796798Z" level=info msg="shim disconnected" id=7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0 namespace=k8s.io Mar 2 13:09:17.972763 containerd[1461]: time="2026-03-02T13:09:17.972169675Z" level=warning msg="cleaning up after shim disconnected" id=7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0 namespace=k8s.io Mar 2 13:09:17.972763 containerd[1461]: time="2026-03-02T13:09:17.972287154Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:09:18.251724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f1024235acd577f8daf1cc7fbdc5d66b9cb442736819e732ba83a9a52b3d3f0-rootfs.mount: Deactivated successfully. Mar 2 13:09:18.568194 containerd[1461]: time="2026-03-02T13:09:18.566491951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 13:09:18.914763 kubelet[2528]: E0302 13:09:18.914504 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:20.940461 kubelet[2528]: E0302 13:09:20.940390 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:21.578659 containerd[1461]: time="2026-03-02T13:09:21.578502075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:21.579495 containerd[1461]: time="2026-03-02T13:09:21.579445415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418" Mar 2 13:09:21.580469 containerd[1461]: time="2026-03-02T13:09:21.580416988Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:21.583453 containerd[1461]: time="2026-03-02T13:09:21.583375662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:21.584200 containerd[1461]: time="2026-03-02T13:09:21.584164532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 3.017626205s" Mar 2 13:09:21.584253 containerd[1461]: time="2026-03-02T13:09:21.584202954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\"" Mar 2 13:09:21.590000 containerd[1461]: time="2026-03-02T13:09:21.589957683Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 13:09:21.687730 
containerd[1461]: time="2026-03-02T13:09:21.687505235Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1\"" Mar 2 13:09:21.691416 containerd[1461]: time="2026-03-02T13:09:21.691029529Z" level=info msg="StartContainer for \"139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1\"" Mar 2 13:09:21.784764 systemd[1]: Started cri-containerd-139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1.scope - libcontainer container 139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1. Mar 2 13:09:21.844217 containerd[1461]: time="2026-03-02T13:09:21.844042418Z" level=info msg="StartContainer for \"139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1\" returns successfully" Mar 2 13:09:22.914004 kubelet[2528]: E0302 13:09:22.913902 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpl5p" podUID="c237a7a5-5d24-43e5-a73e-bee8dafc330d" Mar 2 13:09:22.965726 systemd[1]: cri-containerd-139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1.scope: Deactivated successfully. Mar 2 13:09:22.966409 systemd[1]: cri-containerd-139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1.scope: Consumed 1.304s CPU time. Mar 2 13:09:23.009142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1-rootfs.mount: Deactivated successfully. Mar 2 13:09:23.013908 containerd[1461]: time="2026-03-02T13:09:23.013111195Z" level=info msg="shim disconnected" id=139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1 namespace=k8s.io Mar 2 13:09:23.013908 containerd[1461]: time="2026-03-02T13:09:23.013304977Z" level=warning msg="cleaning up after shim disconnected" id=139b22c1408bc47573bad5669ca6732389b65c5d343d402150927abdd1ed45d1 namespace=k8s.io Mar 2 13:09:23.013908 containerd[1461]: time="2026-03-02T13:09:23.013318001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:09:23.014396 kubelet[2528]: I0302 13:09:23.013404 2528 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 2 13:09:23.100663 systemd[1]: Created slice kubepods-burstable-pod602aa150_36ee_49fc_9e50_ac7469877eb8.slice - libcontainer container kubepods-burstable-pod602aa150_36ee_49fc_9e50_ac7469877eb8.slice. Mar 2 13:09:23.112006 systemd[1]: Created slice kubepods-besteffort-podd8f412a5_1446_4c06_b684_ffc308baee40.slice - libcontainer container kubepods-besteffort-podd8f412a5_1446_4c06_b684_ffc308baee40.slice. Mar 2 13:09:23.150447 systemd[1]: Created slice kubepods-besteffort-pod4e957ef7_0903_4b10_8258_cb24b1bc5ba6.slice - libcontainer container kubepods-besteffort-pod4e957ef7_0903_4b10_8258_cb24b1bc5ba6.slice. Mar 2 13:09:23.169193 systemd[1]: Created slice kubepods-burstable-pod31943970_ce49_40d1_8211_0cc4a4d2c4a7.slice - libcontainer container kubepods-burstable-pod31943970_ce49_40d1_8211_0cc4a4d2c4a7.slice. Mar 2 13:09:23.178409 systemd[1]: Created slice kubepods-besteffort-pod344f0efe_0720_4de3_b1fe_8bb01ab80e58.slice - libcontainer container kubepods-besteffort-pod344f0efe_0720_4de3_b1fe_8bb01ab80e58.slice. 
Mar 2 13:09:23.189664 systemd[1]: Created slice kubepods-besteffort-pod95285a45_8e5b_4b4e_8b93_d4b8919773db.slice - libcontainer container kubepods-besteffort-pod95285a45_8e5b_4b4e_8b93_d4b8919773db.slice. Mar 2 13:09:23.195932 systemd[1]: Created slice kubepods-besteffort-pod0398d338_e506_4e7c_91f8_28e85a11fc76.slice - libcontainer container kubepods-besteffort-pod0398d338_e506_4e7c_91f8_28e85a11fc76.slice. Mar 2 13:09:23.263388 kubelet[2528]: I0302 13:09:23.263286 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e957ef7-0903-4b10-8258-cb24b1bc5ba6-goldmane-ca-bundle\") pod \"goldmane-7d7658d587-p92rw\" (UID: \"4e957ef7-0903-4b10-8258-cb24b1bc5ba6\") " pod="calico-system/goldmane-7d7658d587-p92rw" Mar 2 13:09:23.263388 kubelet[2528]: I0302 13:09:23.263349 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9brc\" (UniqueName: \"kubernetes.io/projected/4e957ef7-0903-4b10-8258-cb24b1bc5ba6-kube-api-access-p9brc\") pod \"goldmane-7d7658d587-p92rw\" (UID: \"4e957ef7-0903-4b10-8258-cb24b1bc5ba6\") " pod="calico-system/goldmane-7d7658d587-p92rw" Mar 2 13:09:23.263388 kubelet[2528]: I0302 13:09:23.263391 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6hz\" (UniqueName: \"kubernetes.io/projected/0398d338-e506-4e7c-91f8-28e85a11fc76-kube-api-access-nn6hz\") pod \"calico-apiserver-5db5ff8f5c-zh52r\" (UID: \"0398d338-e506-4e7c-91f8-28e85a11fc76\") " pod="calico-system/calico-apiserver-5db5ff8f5c-zh52r" Mar 2 13:09:23.263723 kubelet[2528]: I0302 13:09:23.263416 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxkf\" (UniqueName: \"kubernetes.io/projected/31943970-ce49-40d1-8211-0cc4a4d2c4a7-kube-api-access-llxkf\") pod \"coredns-7d764666f9-pdthq\" (UID: \"31943970-ce49-40d1-8211-0cc4a4d2c4a7\") " pod="kube-system/coredns-7d764666f9-pdthq" Mar 2 13:09:23.263723 kubelet[2528]: I0302 13:09:23.263568 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-nginx-config\") pod \"whisker-67b5486978-26wb5\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " pod="calico-system/whisker-67b5486978-26wb5" Mar 2 13:09:23.263723 kubelet[2528]: I0302 13:09:23.263698 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f0efe-0720-4de3-b1fe-8bb01ab80e58-tigera-ca-bundle\") pod \"calico-kube-controllers-7dfd68d47b-n9g4h\" (UID: \"344f0efe-0720-4de3-b1fe-8bb01ab80e58\") " pod="calico-system/calico-kube-controllers-7dfd68d47b-n9g4h" Mar 2 13:09:23.263908 kubelet[2528]: I0302 13:09:23.263749 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8f412a5-1446-4c06-b684-ffc308baee40-calico-apiserver-certs\") pod \"calico-apiserver-5db5ff8f5c-6clqc\" (UID: \"d8f412a5-1446-4c06-b684-ffc308baee40\") " pod="calico-system/calico-apiserver-5db5ff8f5c-6clqc" Mar 2 13:09:23.263908 kubelet[2528]: I0302 13:09:23.263769 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c48bj\" (UniqueName: 
\"kubernetes.io/projected/d8f412a5-1446-4c06-b684-ffc308baee40-kube-api-access-c48bj\") pod \"calico-apiserver-5db5ff8f5c-6clqc\" (UID: \"d8f412a5-1446-4c06-b684-ffc308baee40\") " pod="calico-system/calico-apiserver-5db5ff8f5c-6clqc" Mar 2 13:09:23.264316 kubelet[2528]: I0302 13:09:23.264166 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-ca-bundle\") pod \"whisker-67b5486978-26wb5\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " pod="calico-system/whisker-67b5486978-26wb5" Mar 2 13:09:23.264316 kubelet[2528]: I0302 13:09:23.264289 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcf4\" (UniqueName: \"kubernetes.io/projected/95285a45-8e5b-4b4e-8b93-d4b8919773db-kube-api-access-6tcf4\") pod \"whisker-67b5486978-26wb5\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " pod="calico-system/whisker-67b5486978-26wb5" Mar 2 13:09:23.264500 kubelet[2528]: I0302 13:09:23.264360 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-backend-key-pair\") pod \"whisker-67b5486978-26wb5\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " pod="calico-system/whisker-67b5486978-26wb5" Mar 2 13:09:23.264500 kubelet[2528]: I0302 13:09:23.264395 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zrt\" (UniqueName: \"kubernetes.io/projected/344f0efe-0720-4de3-b1fe-8bb01ab80e58-kube-api-access-x8zrt\") pod \"calico-kube-controllers-7dfd68d47b-n9g4h\" (UID: \"344f0efe-0720-4de3-b1fe-8bb01ab80e58\") " pod="calico-system/calico-kube-controllers-7dfd68d47b-n9g4h" Mar 2 13:09:23.264500 kubelet[2528]: I0302 13:09:23.264418 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/602aa150-36ee-49fc-9e50-ac7469877eb8-config-volume\") pod \"coredns-7d764666f9-nx776\" (UID: \"602aa150-36ee-49fc-9e50-ac7469877eb8\") " pod="kube-system/coredns-7d764666f9-nx776" Mar 2 13:09:23.264500 kubelet[2528]: I0302 13:09:23.264440 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31943970-ce49-40d1-8211-0cc4a4d2c4a7-config-volume\") pod \"coredns-7d764666f9-pdthq\" (UID: \"31943970-ce49-40d1-8211-0cc4a4d2c4a7\") " pod="kube-system/coredns-7d764666f9-pdthq" Mar 2 13:09:23.264500 kubelet[2528]: I0302 13:09:23.264463 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0398d338-e506-4e7c-91f8-28e85a11fc76-calico-apiserver-certs\") pod \"calico-apiserver-5db5ff8f5c-zh52r\" (UID: \"0398d338-e506-4e7c-91f8-28e85a11fc76\") " pod="calico-system/calico-apiserver-5db5ff8f5c-zh52r" Mar 2 13:09:23.264698 kubelet[2528]: I0302 13:09:23.264482 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lm7\" (UniqueName: \"kubernetes.io/projected/602aa150-36ee-49fc-9e50-ac7469877eb8-kube-api-access-76lm7\") pod \"coredns-7d764666f9-nx776\" (UID: \"602aa150-36ee-49fc-9e50-ac7469877eb8\") " pod="kube-system/coredns-7d764666f9-nx776" 
Mar 2 13:09:23.264698 kubelet[2528]: I0302 13:09:23.264507 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e957ef7-0903-4b10-8258-cb24b1bc5ba6-config\") pod \"goldmane-7d7658d587-p92rw\" (UID: \"4e957ef7-0903-4b10-8258-cb24b1bc5ba6\") " pod="calico-system/goldmane-7d7658d587-p92rw" Mar 2 13:09:23.264698 kubelet[2528]: I0302 13:09:23.264526 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e957ef7-0903-4b10-8258-cb24b1bc5ba6-goldmane-key-pair\") pod \"goldmane-7d7658d587-p92rw\" (UID: \"4e957ef7-0903-4b10-8258-cb24b1bc5ba6\") " pod="calico-system/goldmane-7d7658d587-p92rw" Mar 2 13:09:23.409664 kubelet[2528]: E0302 13:09:23.409572 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:23.411434 containerd[1461]: time="2026-03-02T13:09:23.410748058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nx776,Uid:602aa150-36ee-49fc-9e50-ac7469877eb8,Namespace:kube-system,Attempt:0,}" Mar 2 13:09:23.448042 containerd[1461]: time="2026-03-02T13:09:23.447889692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-6clqc,Uid:d8f412a5-1446-4c06-b684-ffc308baee40,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:23.470355 containerd[1461]: time="2026-03-02T13:09:23.470253761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-p92rw,Uid:4e957ef7-0903-4b10-8258-cb24b1bc5ba6,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:23.480497 kubelet[2528]: E0302 13:09:23.479937 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:23.482109 containerd[1461]: time="2026-03-02T13:09:23.482043339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pdthq,Uid:31943970-ce49-40d1-8211-0cc4a4d2c4a7,Namespace:kube-system,Attempt:0,}" Mar 2 13:09:23.491204 containerd[1461]: time="2026-03-02T13:09:23.491012628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd68d47b-n9g4h,Uid:344f0efe-0720-4de3-b1fe-8bb01ab80e58,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:23.496653 containerd[1461]: time="2026-03-02T13:09:23.496580419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b5486978-26wb5,Uid:95285a45-8e5b-4b4e-8b93-d4b8919773db,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:23.503175 containerd[1461]: time="2026-03-02T13:09:23.503148016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-zh52r,Uid:0398d338-e506-4e7c-91f8-28e85a11fc76,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:23.633193 containerd[1461]: time="2026-03-02T13:09:23.633098479Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 13:09:23.680024 containerd[1461]: time="2026-03-02T13:09:23.679911538Z" level=error msg="Failed to destroy network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.683713 containerd[1461]: time="2026-03-02T13:09:23.683456783Z" level=error msg="encountered an error cleaning up failed sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.683713 containerd[1461]: time="2026-03-02T13:09:23.683536864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-6clqc,Uid:d8f412a5-1446-4c06-b684-ffc308baee40,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.691705 containerd[1461]: time="2026-03-02T13:09:23.691523094Z" level=info msg="CreateContainer within sandbox \"f6d4a46cfb19a86443aff5a5027de262685b5dcec0deafc9df258fe0f4c857ff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"555a555974a60f7934a508810176cf2010832b4eabb94bc7ec61e3e60d4c5ada\"" Mar 2 13:09:23.699747 containerd[1461]: time="2026-03-02T13:09:23.699459707Z" level=error msg="Failed to destroy network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.701046 containerd[1461]: time="2026-03-02T13:09:23.700951340Z" level=error msg="encountered an error cleaning up failed sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.702748 containerd[1461]: time="2026-03-02T13:09:23.702719350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nx776,Uid:602aa150-36ee-49fc-9e50-ac7469877eb8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.703434 kubelet[2528]: E0302 13:09:23.703319 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.703434 kubelet[2528]: E0302 13:09:23.703415 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-nx776" Mar 2 13:09:23.703763 kubelet[2528]: E0302 13:09:23.703445 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-nx776" Mar 2 13:09:23.703763 kubelet[2528]: E0302 13:09:23.703522 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-nx776_kube-system(602aa150-36ee-49fc-9e50-ac7469877eb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-nx776_kube-system(602aa150-36ee-49fc-9e50-ac7469877eb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-nx776" podUID="602aa150-36ee-49fc-9e50-ac7469877eb8" Mar 2 13:09:23.703763 kubelet[2528]: E0302 13:09:23.703433 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.708348 kubelet[2528]: E0302 13:09:23.707731 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5db5ff8f5c-6clqc" Mar 2 13:09:23.709813 kubelet[2528]: E0302 13:09:23.709257 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5db5ff8f5c-6clqc" Mar 2 13:09:23.712388 containerd[1461]: time="2026-03-02T13:09:23.710242920Z" level=info msg="StartContainer for \"555a555974a60f7934a508810176cf2010832b4eabb94bc7ec61e3e60d4c5ada\"" Mar 2 13:09:23.716748 kubelet[2528]: E0302 13:09:23.710817 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5db5ff8f5c-6clqc_calico-system(d8f412a5-1446-4c06-b684-ffc308baee40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5db5ff8f5c-6clqc_calico-system(d8f412a5-1446-4c06-b684-ffc308baee40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5db5ff8f5c-6clqc" podUID="d8f412a5-1446-4c06-b684-ffc308baee40" Mar 2 13:09:23.734519 containerd[1461]: time="2026-03-02T13:09:23.734478630Z" level=error msg="Failed to destroy network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.737122 containerd[1461]: time="2026-03-02T13:09:23.737048396Z" level=error msg="encountered an error cleaning up failed sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.737356 containerd[1461]: time="2026-03-02T13:09:23.737333489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-p92rw,Uid:4e957ef7-0903-4b10-8258-cb24b1bc5ba6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.737794 kubelet[2528]: E0302 13:09:23.737766 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.738180 kubelet[2528]: E0302 13:09:23.738161 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7d7658d587-p92rw" Mar 2 13:09:23.738321 kubelet[2528]: E0302 13:09:23.738301 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7d7658d587-p92rw" Mar 2 13:09:23.738535 kubelet[2528]: E0302 13:09:23.738474 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7d7658d587-p92rw_calico-system(4e957ef7-0903-4b10-8258-cb24b1bc5ba6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7d7658d587-p92rw_calico-system(4e957ef7-0903-4b10-8258-cb24b1bc5ba6)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7d7658d587-p92rw" podUID="4e957ef7-0903-4b10-8258-cb24b1bc5ba6" Mar 2 13:09:23.770920 systemd[1]: Started cri-containerd-555a555974a60f7934a508810176cf2010832b4eabb94bc7ec61e3e60d4c5ada.scope - libcontainer container 555a555974a60f7934a508810176cf2010832b4eabb94bc7ec61e3e60d4c5ada. Mar 2 13:09:23.782993 containerd[1461]: time="2026-03-02T13:09:23.782956683Z" level=error msg="Failed to destroy network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.783566 containerd[1461]: time="2026-03-02T13:09:23.783532929Z" level=error msg="encountered an error cleaning up failed sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.783845 containerd[1461]: time="2026-03-02T13:09:23.783818371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd68d47b-n9g4h,Uid:344f0efe-0720-4de3-b1fe-8bb01ab80e58,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.784097 containerd[1461]: time="2026-03-02T13:09:23.784076823Z" level=error msg="Failed to destroy network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.784477 kubelet[2528]: E0302 13:09:23.784446 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.784755 kubelet[2528]: E0302 13:09:23.784734 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfd68d47b-n9g4h" Mar 2 13:09:23.785957 kubelet[2528]: E0302 13:09:23.784988 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfd68d47b-n9g4h" Mar 2 13:09:23.785957 kubelet[2528]: E0302 13:09:23.785743 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dfd68d47b-n9g4h_calico-system(344f0efe-0720-4de3-b1fe-8bb01ab80e58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dfd68d47b-n9g4h_calico-system(344f0efe-0720-4de3-b1fe-8bb01ab80e58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfd68d47b-n9g4h" podUID="344f0efe-0720-4de3-b1fe-8bb01ab80e58" Mar 2 13:09:23.787241 containerd[1461]: time="2026-03-02T13:09:23.787212225Z" level=error msg="encountered an error cleaning up failed sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.787471 containerd[1461]: time="2026-03-02T13:09:23.787379418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pdthq,Uid:31943970-ce49-40d1-8211-0cc4a4d2c4a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.788118 kubelet[2528]: E0302 13:09:23.788096 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.788322 kubelet[2528]: E0302 13:09:23.788247 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pdthq" Mar 2 13:09:23.788448 kubelet[2528]: E0302 13:09:23.788430 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-pdthq" Mar 2 13:09:23.788720 kubelet[2528]: E0302 13:09:23.788589 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-pdthq_kube-system(31943970-ce49-40d1-8211-0cc4a4d2c4a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-pdthq_kube-system(31943970-ce49-40d1-8211-0cc4a4d2c4a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pdthq" podUID="31943970-ce49-40d1-8211-0cc4a4d2c4a7" Mar 2 13:09:23.789263 containerd[1461]: time="2026-03-02T13:09:23.789233157Z" level=error msg="Failed to destroy network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.789934 containerd[1461]: time="2026-03-02T13:09:23.789907235Z" level=error msg="encountered an error cleaning up failed sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.790165 containerd[1461]: time="2026-03-02T13:09:23.790063176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b5486978-26wb5,Uid:95285a45-8e5b-4b4e-8b93-d4b8919773db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.790575 kubelet[2528]: E0302 13:09:23.790556 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.790790 kubelet[2528]: E0302 13:09:23.790726 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67b5486978-26wb5" Mar 2 13:09:23.790901 kubelet[2528]: E0302 13:09:23.790883 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-67b5486978-26wb5" Mar 2 13:09:23.791316 kubelet[2528]: E0302 13:09:23.791180 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67b5486978-26wb5_calico-system(95285a45-8e5b-4b4e-8b93-d4b8919773db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67b5486978-26wb5_calico-system(95285a45-8e5b-4b4e-8b93-d4b8919773db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67b5486978-26wb5" podUID="95285a45-8e5b-4b4e-8b93-d4b8919773db" Mar 2 13:09:23.808770 containerd[1461]: time="2026-03-02T13:09:23.808656295Z" level=error msg="Failed to destroy network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.809354 containerd[1461]: time="2026-03-02T13:09:23.809273467Z" level=error msg="encountered an error cleaning up failed sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.809419 containerd[1461]: time="2026-03-02T13:09:23.809372632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-zh52r,Uid:0398d338-e506-4e7c-91f8-28e85a11fc76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.809785 kubelet[2528]: E0302 13:09:23.809729 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:09:23.809919 kubelet[2528]: E0302 13:09:23.809811 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5db5ff8f5c-zh52r" Mar 2 13:09:23.809919 kubelet[2528]: E0302 13:09:23.809835 2528 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5db5ff8f5c-zh52r" Mar 2 13:09:23.810017 kubelet[2528]: E0302 13:09:23.809922 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5db5ff8f5c-zh52r_calico-system(0398d338-e506-4e7c-91f8-28e85a11fc76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5db5ff8f5c-zh52r_calico-system(0398d338-e506-4e7c-91f8-28e85a11fc76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5db5ff8f5c-zh52r" podUID="0398d338-e506-4e7c-91f8-28e85a11fc76" Mar 2 13:09:23.830961 containerd[1461]: time="2026-03-02T13:09:23.830886262Z" level=info msg="StartContainer for \"555a555974a60f7934a508810176cf2010832b4eabb94bc7ec61e3e60d4c5ada\" returns successfully" Mar 2 13:09:24.594772 kubelet[2528]: I0302 13:09:24.594718 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:24.596431 kubelet[2528]: I0302 13:09:24.596344 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:24.598699 kubelet[2528]: I0302 13:09:24.597759 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:24.605676 kubelet[2528]: I0302 13:09:24.605557 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:24.630720 containerd[1461]: time="2026-03-02T13:09:24.630049291Z" level=info msg="StopPodSandbox for \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\"" Mar 2 13:09:24.630720 containerd[1461]: time="2026-03-02T13:09:24.630197874Z" level=info msg="StopPodSandbox for \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\"" Mar 2 13:09:24.630720 containerd[1461]: time="2026-03-02T13:09:24.630236270Z" level=info msg="StopPodSandbox for \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\"" Mar 2 13:09:24.630720 containerd[1461]: time="2026-03-02T13:09:24.630246076Z" level=info msg="StopPodSandbox for \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\"" Mar 2 13:09:24.634310 containerd[1461]: time="2026-03-02T13:09:24.634283905Z" level=info msg="Ensure that sandbox fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d in task-service has been cleanup successfully" Mar 2 13:09:24.634380 containerd[1461]: time="2026-03-02T13:09:24.634302907Z" level=info msg="Ensure that sandbox 8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8 in task-service has been cleanup successfully" Mar 2 13:09:24.635721 containerd[1461]: time="2026-03-02T13:09:24.634575988Z" level=info msg="Ensure that sandbox 853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577 in task-service has been cleanup successfully" Mar 2 13:09:24.635721 containerd[1461]: time="2026-03-02T13:09:24.634287502Z" level=info 
msg="Ensure that sandbox 3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067 in task-service has been cleanup successfully" Mar 2 13:09:24.638280 kubelet[2528]: I0302 13:09:24.638224 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:24.639686 containerd[1461]: time="2026-03-02T13:09:24.639122185Z" level=info msg="StopPodSandbox for \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\"" Mar 2 13:09:24.639686 containerd[1461]: time="2026-03-02T13:09:24.639469624Z" level=info msg="Ensure that sandbox 803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2 in task-service has been cleanup successfully" Mar 2 13:09:24.642149 kubelet[2528]: I0302 13:09:24.642006 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:24.643703 containerd[1461]: time="2026-03-02T13:09:24.642685695Z" level=info msg="StopPodSandbox for \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\"" Mar 2 13:09:24.643703 containerd[1461]: time="2026-03-02T13:09:24.642946191Z" level=info msg="Ensure that sandbox e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9 in task-service has been cleanup successfully" Mar 2 13:09:24.647066 kubelet[2528]: I0302 13:09:24.646962 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:24.650934 containerd[1461]: time="2026-03-02T13:09:24.650839924Z" level=info msg="StopPodSandbox for \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\"" Mar 2 13:09:24.652033 containerd[1461]: time="2026-03-02T13:09:24.651013197Z" level=info msg="Ensure that sandbox a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965 in task-service has been cleanup successfully" Mar 2 13:09:24.856913 kubelet[2528]: I0302 13:09:24.855458 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-gdvv9" podStartSLOduration=2.088539733 podStartE2EDuration="15.855403376s" podCreationTimestamp="2026-03-02 13:09:09 +0000 UTC" firstStartedPulling="2026-03-02 13:09:09.825198613 +0000 UTC m=+19.082939971" lastFinishedPulling="2026-03-02 13:09:23.592062246 +0000 UTC m=+32.849803614" observedRunningTime="2026-03-02 13:09:24.69680291 +0000 UTC m=+33.954544269" watchObservedRunningTime="2026-03-02 13:09:24.855403376 +0000 UTC m=+34.113144733" Mar 2 13:09:24.931144 systemd[1]: Created slice kubepods-besteffort-podc237a7a5_5d24_43e5_a73e_bee8dafc330d.slice - libcontainer container kubepods-besteffort-podc237a7a5_5d24_43e5_a73e_bee8dafc330d.slice. Mar 2 13:09:24.976220 containerd[1461]: time="2026-03-02T13:09:24.976109324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpl5p,Uid:c237a7a5-5d24-43e5-a73e-bee8dafc330d,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.862 [INFO][3815] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.863 [INFO][3815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" iface="eth0" netns="/var/run/netns/cni-24572672-cd38-52ab-9989-b507099e3919" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.866 [INFO][3815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" iface="eth0" netns="/var/run/netns/cni-24572672-cd38-52ab-9989-b507099e3919" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.867 [INFO][3815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" iface="eth0" netns="/var/run/netns/cni-24572672-cd38-52ab-9989-b507099e3919" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.867 [INFO][3815] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.867 [INFO][3815] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.974 [INFO][3865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.977 [INFO][3865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:24.977 [INFO][3865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:25.003 [WARNING][3865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:25.003 [INFO][3865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:25.008 [INFO][3865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.041044 containerd[1461]: 2026-03-02 13:09:25.025 [INFO][3815] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:25.042303 containerd[1461]: time="2026-03-02T13:09:25.042263199Z" level=info msg="TearDown network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\" successfully" Mar 2 13:09:25.042432 containerd[1461]: time="2026-03-02T13:09:25.042402249Z" level=info msg="StopPodSandbox for \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\" returns successfully" Mar 2 13:09:25.048154 systemd[1]: run-netns-cni\x2d24572672\x2dcd38\x2d52ab\x2d9989\x2db507099e3919.mount: Deactivated successfully. 
Mar 2 13:09:25.052906 containerd[1461]: time="2026-03-02T13:09:25.052783898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd68d47b-n9g4h,Uid:344f0efe-0720-4de3-b1fe-8bb01ab80e58,Namespace:calico-system,Attempt:1,}" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.900 [INFO][3822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.901 [INFO][3822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" iface="eth0" netns="/var/run/netns/cni-812eb958-90ac-1877-0e87-a1d620a08f9b" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.903 [INFO][3822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" iface="eth0" netns="/var/run/netns/cni-812eb958-90ac-1877-0e87-a1d620a08f9b" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.905 [INFO][3822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" iface="eth0" netns="/var/run/netns/cni-812eb958-90ac-1877-0e87-a1d620a08f9b" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.905 [INFO][3822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.905 [INFO][3822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.988 [INFO][3889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:24.994 [INFO][3889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:25.008 [INFO][3889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:25.020 [WARNING][3889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:25.022 [INFO][3889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:25.045 [INFO][3889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.063941 containerd[1461]: 2026-03-02 13:09:25.051 [INFO][3822] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:25.065539 containerd[1461]: time="2026-03-02T13:09:25.065453914Z" level=info msg="TearDown network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\" successfully" Mar 2 13:09:25.065539 containerd[1461]: time="2026-03-02T13:09:25.065512643Z" level=info msg="StopPodSandbox for \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\" returns successfully" Mar 2 13:09:25.071202 containerd[1461]: time="2026-03-02T13:09:25.070689585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-p92rw,Uid:4e957ef7-0903-4b10-8258-cb24b1bc5ba6,Namespace:calico-system,Attempt:1,}" Mar 2 13:09:25.073773 systemd[1]: run-netns-cni\x2d812eb958\x2d90ac\x2d1877\x2d0e87\x2da1d620a08f9b.mount: Deactivated successfully. Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.897 [INFO][3808] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.898 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" iface="eth0" netns="/var/run/netns/cni-298130a3-d731-ebdc-4264-1fc1d99be442" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.898 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" iface="eth0" netns="/var/run/netns/cni-298130a3-d731-ebdc-4264-1fc1d99be442" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.899 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" iface="eth0" netns="/var/run/netns/cni-298130a3-d731-ebdc-4264-1fc1d99be442" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.899 [INFO][3808] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.899 [INFO][3808] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.988 [INFO][3892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:24.994 [INFO][3892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:25.045 [INFO][3892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:25.071 [WARNING][3892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:25.072 [INFO][3892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:25.089 [INFO][3892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.113444 containerd[1461]: 2026-03-02 13:09:25.099 [INFO][3808] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:25.116147 containerd[1461]: time="2026-03-02T13:09:25.115914803Z" level=info msg="TearDown network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\" successfully" Mar 2 13:09:25.116147 containerd[1461]: time="2026-03-02T13:09:25.116019038Z" level=info msg="StopPodSandbox for \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\" returns successfully" Mar 2 13:09:25.125063 containerd[1461]: time="2026-03-02T13:09:25.124976205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-zh52r,Uid:0398d338-e506-4e7c-91f8-28e85a11fc76,Namespace:calico-system,Attempt:1,}" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.852 [INFO][3786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.854 [INFO][3786] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" iface="eth0" netns="/var/run/netns/cni-f0023ef7-3bed-2fd4-c916-3178bc7f0568" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.855 [INFO][3786] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" iface="eth0" netns="/var/run/netns/cni-f0023ef7-3bed-2fd4-c916-3178bc7f0568" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.858 [INFO][3786] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" iface="eth0" netns="/var/run/netns/cni-f0023ef7-3bed-2fd4-c916-3178bc7f0568" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.858 [INFO][3786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.860 [INFO][3786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:24.999 [INFO][3861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:25.002 [INFO][3861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:25.092 [INFO][3861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:25.127 [WARNING][3861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:25.127 [INFO][3861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:25.140 [INFO][3861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.157107 containerd[1461]: 2026-03-02 13:09:25.152 [INFO][3786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:25.158133 containerd[1461]: time="2026-03-02T13:09:25.157846565Z" level=info msg="TearDown network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\" successfully" Mar 2 13:09:25.158133 containerd[1461]: time="2026-03-02T13:09:25.157925843Z" level=info msg="StopPodSandbox for \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\" returns successfully" Mar 2 13:09:25.160094 kubelet[2528]: E0302 13:09:25.160032 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:25.160943 containerd[1461]: time="2026-03-02T13:09:25.160907968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nx776,Uid:602aa150-36ee-49fc-9e50-ac7469877eb8,Namespace:kube-system,Attempt:1,}" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:24.906 [INFO][3799] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:24.906 [INFO][3799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" iface="eth0" netns="/var/run/netns/cni-b6c35c79-4ef8-af0e-bcda-21674758ac1a" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:24.907 [INFO][3799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" iface="eth0" netns="/var/run/netns/cni-b6c35c79-4ef8-af0e-bcda-21674758ac1a" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:24.907 [INFO][3799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" iface="eth0" netns="/var/run/netns/cni-b6c35c79-4ef8-af0e-bcda-21674758ac1a" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:24.907 [INFO][3799] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:24.907 [INFO][3799] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.068 [INFO][3901] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.070 [INFO][3901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.140 [INFO][3901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.148 [WARNING][3901] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.148 [INFO][3901] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.151 [INFO][3901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.171719 containerd[1461]: 2026-03-02 13:09:25.156 [INFO][3799] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:25.172975 containerd[1461]: time="2026-03-02T13:09:25.172015544Z" level=info msg="TearDown network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\" successfully" Mar 2 13:09:25.172975 containerd[1461]: time="2026-03-02T13:09:25.172136640Z" level=info msg="StopPodSandbox for \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\" returns successfully" Mar 2 13:09:25.175572 containerd[1461]: time="2026-03-02T13:09:25.175543298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-6clqc,Uid:d8f412a5-1446-4c06-b684-ffc308baee40,Namespace:calico-system,Attempt:1,}" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:24.864 [INFO][3768] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:24.865 [INFO][3768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" iface="eth0" netns="/var/run/netns/cni-df719736-0824-8f63-f0cd-e746b79c9002" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:24.865 [INFO][3768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" iface="eth0" netns="/var/run/netns/cni-df719736-0824-8f63-f0cd-e746b79c9002" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:24.865 [INFO][3768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" iface="eth0" netns="/var/run/netns/cni-df719736-0824-8f63-f0cd-e746b79c9002" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:24.865 [INFO][3768] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:24.865 [INFO][3768] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.073 [INFO][3864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.089 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.155 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.169 [WARNING][3864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.169 [INFO][3864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.173 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.204328 containerd[1461]: 2026-03-02 13:09:25.191 [INFO][3768] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:25.206151 containerd[1461]: time="2026-03-02T13:09:25.206124609Z" level=info msg="TearDown network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\" successfully" Mar 2 13:09:25.206226 containerd[1461]: time="2026-03-02T13:09:25.206212793Z" level=info msg="StopPodSandbox for \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\" returns successfully" Mar 2 13:09:25.211398 kubelet[2528]: E0302 13:09:25.211261 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:25.213359 containerd[1461]: time="2026-03-02T13:09:25.212946953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pdthq,Uid:31943970-ce49-40d1-8211-0cc4a4d2c4a7,Namespace:kube-system,Attempt:1,}" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:24.864 [INFO][3763] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:24.869 [INFO][3763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" iface="eth0" netns="/var/run/netns/cni-ecbc87be-cc58-dcc1-fc21-47cc39165429" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:24.870 [INFO][3763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" iface="eth0" netns="/var/run/netns/cni-ecbc87be-cc58-dcc1-fc21-47cc39165429" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:24.873 [INFO][3763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" iface="eth0" netns="/var/run/netns/cni-ecbc87be-cc58-dcc1-fc21-47cc39165429" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:24.873 [INFO][3763] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:24.874 [INFO][3763] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.099 [INFO][3869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.099 [INFO][3869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.173 [INFO][3869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.200 [WARNING][3869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.204 [INFO][3869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.210 [INFO][3869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.256185 containerd[1461]: 2026-03-02 13:09:25.240 [INFO][3763] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:25.262766 containerd[1461]: time="2026-03-02T13:09:25.262732552Z" level=info msg="TearDown network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\" successfully" Mar 2 13:09:25.262936 containerd[1461]: time="2026-03-02T13:09:25.262914963Z" level=info msg="StopPodSandbox for \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\" returns successfully" Mar 2 13:09:25.391043 kubelet[2528]: I0302 13:09:25.390230 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-backend-key-pair\") pod \"95285a45-8e5b-4b4e-8b93-d4b8919773db\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " Mar 2 13:09:25.391043 kubelet[2528]: I0302 13:09:25.390282 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-nginx-config\" (UniqueName: \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-nginx-config\") pod \"95285a45-8e5b-4b4e-8b93-d4b8919773db\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " Mar 2 13:09:25.391043 kubelet[2528]: I0302 13:09:25.390302 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-ca-bundle\") pod \"95285a45-8e5b-4b4e-8b93-d4b8919773db\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " Mar 2 13:09:25.391043 kubelet[2528]: I0302 13:09:25.390393 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/95285a45-8e5b-4b4e-8b93-d4b8919773db-kube-api-access-6tcf4\" (UniqueName: \"kubernetes.io/projected/95285a45-8e5b-4b4e-8b93-d4b8919773db-kube-api-access-6tcf4\") pod \"95285a45-8e5b-4b4e-8b93-d4b8919773db\" (UID: \"95285a45-8e5b-4b4e-8b93-d4b8919773db\") " Mar 2 13:09:25.391043 kubelet[2528]: I0302 13:09:25.390828 2528 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-nginx-config" pod "95285a45-8e5b-4b4e-8b93-d4b8919773db" (UID: "95285a45-8e5b-4b4e-8b93-d4b8919773db"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:09:25.391241 kubelet[2528]: I0302 13:09:25.391210 2528 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-ca-bundle" pod "95285a45-8e5b-4b4e-8b93-d4b8919773db" (UID: "95285a45-8e5b-4b4e-8b93-d4b8919773db"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:09:25.405945 kubelet[2528]: I0302 13:09:25.402664 2528 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-backend-key-pair" pod "95285a45-8e5b-4b4e-8b93-d4b8919773db" (UID: "95285a45-8e5b-4b4e-8b93-d4b8919773db"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:09:25.409849 kubelet[2528]: I0302 13:09:25.409731 2528 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95285a45-8e5b-4b4e-8b93-d4b8919773db-kube-api-access-6tcf4" pod "95285a45-8e5b-4b4e-8b93-d4b8919773db" (UID: "95285a45-8e5b-4b4e-8b93-d4b8919773db"). InnerVolumeSpecName "kube-api-access-6tcf4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:09:25.433143 systemd-networkd[1397]: cali4d4a920f4e0: Link UP Mar 2 13:09:25.434755 systemd-networkd[1397]: cali4d4a920f4e0: Gained carrier Mar 2 13:09:25.491545 kubelet[2528]: I0302 13:09:25.491470 2528 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6tcf4\" (UniqueName: \"kubernetes.io/projected/95285a45-8e5b-4b4e-8b93-d4b8919773db-kube-api-access-6tcf4\") on node \"localhost\" DevicePath \"\"" Mar 2 13:09:25.491545 kubelet[2528]: I0302 13:09:25.491499 2528 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 2 13:09:25.491545 kubelet[2528]: I0302 13:09:25.491510 2528 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 2 13:09:25.491545 kubelet[2528]: I0302 13:09:25.491517 2528 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95285a45-8e5b-4b4e-8b93-d4b8919773db-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.167 [ERROR][3940] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.187 [INFO][3940] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0 calico-kube-controllers-7dfd68d47b- calico-system 344f0efe-0720-4de3-b1fe-8bb01ab80e58 942 0 2026-03-02 13:09:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dfd68d47b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dfd68d47b-n9g4h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4d4a920f4e0 [] [] }} ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.187 [INFO][3940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.308 [INFO][3981] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" HandleID="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.327 [INFO][3981] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" HandleID="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef900), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dfd68d47b-n9g4h", "timestamp":"2026-03-02 13:09:25.308811093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000561a20)} Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.328 [INFO][3981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.328 [INFO][3981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.328 [INFO][3981] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.335 [INFO][3981] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.349 [INFO][3981] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.362 [INFO][3981] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.369 [INFO][3981] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.376 [INFO][3981] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.376 [INFO][3981] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.378 [INFO][3981] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.382 [INFO][3981] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.408 [INFO][3981] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.409 [INFO][3981] ipam/ipam.go 895: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" host="localhost" Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.409 [INFO][3981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:25.500586 containerd[1461]: 2026-03-02 13:09:25.409 [INFO][3981] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" HandleID="k8s-pod-network.8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.501582 containerd[1461]: 2026-03-02 13:09:25.412 [INFO][3940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0", GenerateName:"calico-kube-controllers-7dfd68d47b-", Namespace:"calico-system", SelfLink:"", UID:"344f0efe-0720-4de3-b1fe-8bb01ab80e58", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd68d47b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dfd68d47b-n9g4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d4a920f4e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.501582 containerd[1461]: 2026-03-02 13:09:25.412 [INFO][3940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.501582 containerd[1461]: 2026-03-02 13:09:25.413 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d4a920f4e0 ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.501582 containerd[1461]: 2026-03-02 13:09:25.436 [INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.501582 containerd[1461]: 2026-03-02 13:09:25.437 [INFO][3940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0", GenerateName:"calico-kube-controllers-7dfd68d47b-", Namespace:"calico-system", SelfLink:"", UID:"344f0efe-0720-4de3-b1fe-8bb01ab80e58", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd68d47b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b", Pod:"calico-kube-controllers-7dfd68d47b-n9g4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d4a920f4e0", MAC:"3e:88:2e:b9:0c:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.501582 containerd[1461]: 2026-03-02 13:09:25.471 [INFO][3940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b" Namespace="calico-system" Pod="calico-kube-controllers-7dfd68d47b-n9g4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:25.559370 systemd-networkd[1397]: caliac4d596e711: Link UP Mar 2 13:09:25.559675 systemd-networkd[1397]: caliac4d596e711: Gained carrier Mar 2 13:09:25.588421 containerd[1461]: time="2026-03-02T13:09:25.588189579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:25.588421 containerd[1461]: time="2026-03-02T13:09:25.588242848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:25.588421 containerd[1461]: time="2026-03-02T13:09:25.588253489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:25.588973 containerd[1461]: time="2026-03-02T13:09:25.588939368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.245 [ERROR][3964] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.271 [INFO][3964] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0 calico-apiserver-5db5ff8f5c- calico-system 0398d338-e506-4e7c-91f8-28e85a11fc76 947 0 2026-03-02 13:09:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5db5ff8f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5db5ff8f5c-zh52r eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliac4d596e711 [] [] }} ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.271 [INFO][3964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.338 [INFO][4041] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" HandleID="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.349 [INFO][4041] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" HandleID="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5db5ff8f5c-zh52r", "timestamp":"2026-03-02 13:09:25.338418736 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004afa20)} Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.349 [INFO][4041] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.411 [INFO][4041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.412 [INFO][4041] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.437 [INFO][4041] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.498 [INFO][4041] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.513 [INFO][4041] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.518 [INFO][4041] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.524 [INFO][4041] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.524 [INFO][4041] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.527 [INFO][4041] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6 Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.538 [INFO][4041] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.546 [INFO][4041] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.547 [INFO][4041] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" host="localhost" Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.548 [INFO][4041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:25.595268 containerd[1461]: 2026-03-02 13:09:25.548 [INFO][4041] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" HandleID="k8s-pod-network.3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.595830 containerd[1461]: 2026-03-02 13:09:25.553 [INFO][3964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"0398d338-e506-4e7c-91f8-28e85a11fc76", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5db5ff8f5c-zh52r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac4d596e711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.595830 containerd[1461]: 2026-03-02 13:09:25.554 [INFO][3964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.595830 containerd[1461]: 2026-03-02 13:09:25.554 [INFO][3964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac4d596e711 ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.595830 containerd[1461]: 2026-03-02 13:09:25.560 [INFO][3964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.595830 containerd[1461]: 2026-03-02 13:09:25.565 [INFO][3964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"0398d338-e506-4e7c-91f8-28e85a11fc76", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6", Pod:"calico-apiserver-5db5ff8f5c-zh52r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac4d596e711", MAC:"a2:e2:35:0c:eb:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.595830 containerd[1461]: 2026-03-02 13:09:25.587 [INFO][3964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-zh52r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:25.626111 systemd[1]: Started cri-containerd-8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b.scope - libcontainer container 8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b. Mar 2 13:09:25.655811 containerd[1461]: time="2026-03-02T13:09:25.655333065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:25.656712 containerd[1461]: time="2026-03-02T13:09:25.655731939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:25.656712 containerd[1461]: time="2026-03-02T13:09:25.655850581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:25.656712 containerd[1461]: time="2026-03-02T13:09:25.656401690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:25.672691 systemd[1]: Removed slice kubepods-besteffort-pod95285a45_8e5b_4b4e_8b93_d4b8919773db.slice - libcontainer container kubepods-besteffort-pod95285a45_8e5b_4b4e_8b93_d4b8919773db.slice. 
Mar 2 13:09:25.702116 systemd[1]: Started cri-containerd-3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6.scope - libcontainer container 3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6. Mar 2 13:09:25.732702 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:25.818973 systemd[1]: Created slice kubepods-besteffort-podede4b815_b379_42d3_82de_a6229391cb38.slice - libcontainer container kubepods-besteffort-podede4b815_b379_42d3_82de_a6229391cb38.slice. Mar 2 13:09:25.829495 systemd-networkd[1397]: cali91908ea8c93: Link UP Mar 2 13:09:25.832447 systemd-networkd[1397]: cali91908ea8c93: Gained carrier Mar 2 13:09:25.866534 containerd[1461]: time="2026-03-02T13:09:25.866281762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfd68d47b-n9g4h,Uid:344f0efe-0720-4de3-b1fe-8bb01ab80e58,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b\"" Mar 2 13:09:25.872373 containerd[1461]: time="2026-03-02T13:09:25.871766819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 13:09:25.871800 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.152 [ERROR][3918] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.187 [INFO][3918] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dpl5p-eth0 csi-node-driver- calico-system c237a7a5-5d24-43e5-a73e-bee8dafc330d 758 0 2026-03-02 13:09:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5d8f55657d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dpl5p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali91908ea8c93 [] [] }} ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.187 [INFO][3918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.341 [INFO][3998] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" HandleID="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Workload="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.355 [INFO][3998] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" 
HandleID="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Workload="localhost-k8s-csi--node--driver--dpl5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dpl5p", "timestamp":"2026-03-02 13:09:25.341650904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e3b80)} Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.355 [INFO][3998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.548 [INFO][3998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.548 [INFO][3998] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.558 [INFO][3998] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.570 [INFO][3998] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.646 [INFO][3998] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.663 [INFO][3998] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.690 [INFO][3998] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.690 [INFO][3998] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.708 [INFO][3998] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.767 [INFO][3998] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.799 [INFO][3998] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.800 [INFO][3998] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" host="localhost" Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.805 [INFO][3998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:25.876174 containerd[1461]: 2026-03-02 13:09:25.805 [INFO][3998] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" HandleID="k8s-pod-network.f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Workload="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.876838 containerd[1461]: 2026-03-02 13:09:25.822 [INFO][3918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpl5p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c237a7a5-5d24-43e5-a73e-bee8dafc330d", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dpl5p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali91908ea8c93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.876838 containerd[1461]: 2026-03-02 13:09:25.822 [INFO][3918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.876838 containerd[1461]: 2026-03-02 13:09:25.822 [INFO][3918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91908ea8c93 ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.876838 containerd[1461]: 2026-03-02 13:09:25.833 [INFO][3918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.876838 containerd[1461]: 2026-03-02 13:09:25.834 [INFO][3918] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpl5p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c237a7a5-5d24-43e5-a73e-bee8dafc330d", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de", Pod:"csi-node-driver-dpl5p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali91908ea8c93", MAC:"2a:4b:c5:c4:75:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.876838 containerd[1461]: 2026-03-02 13:09:25.862 [INFO][3918] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de" Namespace="calico-system" Pod="csi-node-driver-dpl5p" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpl5p-eth0" Mar 2 13:09:25.899088 systemd-networkd[1397]: calic1db14a45ac: Link UP Mar 2 13:09:25.899300 systemd-networkd[1397]: calic1db14a45ac: Gained carrier Mar 2 13:09:25.926697 containerd[1461]: time="2026-03-02T13:09:25.925628657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:25.926848 containerd[1461]: time="2026-03-02T13:09:25.926562440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:25.926848 containerd[1461]: time="2026-03-02T13:09:25.926575865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:25.927681 containerd[1461]: time="2026-03-02T13:09:25.927518615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.155 [ERROR][3949] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.197 [INFO][3949] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7d7658d587--p92rw-eth0 goldmane-7d7658d587- calico-system 4e957ef7-0903-4b10-8258-cb24b1bc5ba6 945 0 2026-03-02 13:09:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7d7658d587 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7d7658d587-p92rw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic1db14a45ac [] [] }} ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.198 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.334 [INFO][4013] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" HandleID="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.359 [INFO][4013] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" HandleID="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c1cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7d7658d587-p92rw", "timestamp":"2026-03-02 13:09:25.33438241 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006bc420)} Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.359 [INFO][4013] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.800 [INFO][4013] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.800 [INFO][4013] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.817 [INFO][4013] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.845 [INFO][4013] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.860 [INFO][4013] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.863 [INFO][4013] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.868 [INFO][4013] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.868 [INFO][4013] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.872 [INFO][4013] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07 Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.881 [INFO][4013] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.890 [INFO][4013] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.890 [INFO][4013] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" host="localhost" Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.890 [INFO][4013] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:25.928889 containerd[1461]: 2026-03-02 13:09:25.890 [INFO][4013] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" HandleID="k8s-pod-network.d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.929548 containerd[1461]: 2026-03-02 13:09:25.897 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--p92rw-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"4e957ef7-0903-4b10-8258-cb24b1bc5ba6", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7d7658d587-p92rw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1db14a45ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.929548 containerd[1461]: 2026-03-02 13:09:25.897 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.929548 containerd[1461]: 2026-03-02 13:09:25.897 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1db14a45ac ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.929548 containerd[1461]: 2026-03-02 13:09:25.899 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.929548 containerd[1461]: 2026-03-02 13:09:25.899 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--p92rw-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"4e957ef7-0903-4b10-8258-cb24b1bc5ba6", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07", Pod:"goldmane-7d7658d587-p92rw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1db14a45ac", MAC:"5e:3f:0a:ab:e1:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:25.929548 containerd[1461]: 2026-03-02 13:09:25.916 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07" Namespace="calico-system" Pod="goldmane-7d7658d587-p92rw" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:25.963830 systemd[1]: Started cri-containerd-f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de.scope - libcontainer container f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de. 
Mar 2 13:09:25.983560 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:25.997936 kubelet[2528]: I0302 13:09:25.997039 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ede4b815-b379-42d3-82de-a6229391cb38-nginx-config\") pod \"whisker-8554fc6696-qx8nl\" (UID: \"ede4b815-b379-42d3-82de-a6229391cb38\") " pod="calico-system/whisker-8554fc6696-qx8nl" Mar 2 13:09:25.997936 kubelet[2528]: I0302 13:09:25.997080 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkr8z\" (UniqueName: \"kubernetes.io/projected/ede4b815-b379-42d3-82de-a6229391cb38-kube-api-access-jkr8z\") pod \"whisker-8554fc6696-qx8nl\" (UID: \"ede4b815-b379-42d3-82de-a6229391cb38\") " pod="calico-system/whisker-8554fc6696-qx8nl" Mar 2 13:09:25.997936 kubelet[2528]: I0302 13:09:25.997104 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede4b815-b379-42d3-82de-a6229391cb38-whisker-ca-bundle\") pod \"whisker-8554fc6696-qx8nl\" (UID: \"ede4b815-b379-42d3-82de-a6229391cb38\") " pod="calico-system/whisker-8554fc6696-qx8nl" Mar 2 13:09:25.997936 kubelet[2528]: I0302 13:09:25.997119 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ede4b815-b379-42d3-82de-a6229391cb38-whisker-backend-key-pair\") pod \"whisker-8554fc6696-qx8nl\" (UID: \"ede4b815-b379-42d3-82de-a6229391cb38\") " pod="calico-system/whisker-8554fc6696-qx8nl" Mar 2 13:09:26.031817 containerd[1461]: time="2026-03-02T13:09:26.012944071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:26.031817 containerd[1461]: time="2026-03-02T13:09:26.013014803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:26.031817 containerd[1461]: time="2026-03-02T13:09:26.013028158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.031817 containerd[1461]: time="2026-03-02T13:09:26.013107475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.033658 containerd[1461]: time="2026-03-02T13:09:26.032936965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-zh52r,Uid:0398d338-e506-4e7c-91f8-28e85a11fc76,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6\"" Mar 2 13:09:26.049748 systemd[1]: run-netns-cni\x2d298130a3\x2dd731\x2debdc\x2d4264\x2d1fc1d99be442.mount: Deactivated successfully. Mar 2 13:09:26.049886 systemd[1]: run-netns-cni\x2decbc87be\x2dcc58\x2ddcc1\x2dfc21\x2d47cc39165429.mount: Deactivated successfully. Mar 2 13:09:26.049961 systemd[1]: run-netns-cni\x2ddf719736\x2d0824\x2d8f63\x2df0cd\x2de746b79c9002.mount: Deactivated successfully. Mar 2 13:09:26.050026 systemd[1]: run-netns-cni\x2db6c35c79\x2d4ef8\x2daf0e\x2dbcda\x2d21674758ac1a.mount: Deactivated successfully. 
Mar 2 13:09:26.050089 systemd[1]: run-netns-cni\x2df0023ef7\x2d3bed\x2d2fd4\x2dc916\x2d3178bc7f0568.mount: Deactivated successfully. Mar 2 13:09:26.050155 systemd[1]: var-lib-kubelet-pods-95285a45\x2d8e5b\x2d4b4e\x2d8b93\x2dd4b8919773db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6tcf4.mount: Deactivated successfully. Mar 2 13:09:26.050226 systemd[1]: var-lib-kubelet-pods-95285a45\x2d8e5b\x2d4b4e\x2d8b93\x2dd4b8919773db-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 2 13:09:26.060307 containerd[1461]: time="2026-03-02T13:09:26.057031496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpl5p,Uid:c237a7a5-5d24-43e5-a73e-bee8dafc330d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de\"" Mar 2 13:09:26.079947 systemd-networkd[1397]: cali84c11d74a6e: Link UP Mar 2 13:09:26.103258 systemd-networkd[1397]: cali84c11d74a6e: Gained carrier Mar 2 13:09:26.129820 systemd[1]: Started cri-containerd-d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07.scope - libcontainer container d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07. Mar 2 13:09:26.164556 systemd-networkd[1397]: caliac916584804: Link UP Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.299 [ERROR][3987] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.314 [INFO][3987] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--nx776-eth0 coredns-7d764666f9- kube-system 602aa150-36ee-49fc-9e50-ac7469877eb8 941 0 2026-03-02 13:08:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-nx776 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84c11d74a6e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.314 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.400 [INFO][4056] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" HandleID="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.424 [INFO][4056] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" HandleID="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" 
Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-nx776", "timestamp":"2026-03-02 13:09:25.400535997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b8580)} Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.430 [INFO][4056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.890 [INFO][4056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.890 [INFO][4056] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.916 [INFO][4056] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.943 [INFO][4056] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.959 [INFO][4056] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.963 [INFO][4056] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.971 [INFO][4056] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.972 [INFO][4056] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.975 [INFO][4056] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90 Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.984 [INFO][4056] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.996 [INFO][4056] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.998 [INFO][4056] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" host="localhost" Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.999 [INFO][4056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:26.165942 containerd[1461]: 2026-03-02 13:09:25.999 [INFO][4056] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" HandleID="k8s-pod-network.deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.166436 containerd[1461]: 2026-03-02 13:09:26.036 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--nx776-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"602aa150-36ee-49fc-9e50-ac7469877eb8", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-nx776", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84c11d74a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.166436 containerd[1461]: 2026-03-02 13:09:26.047 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.166436 containerd[1461]: 2026-03-02 13:09:26.047 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84c11d74a6e ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.166436 containerd[1461]: 2026-03-02 13:09:26.111 
[INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.166436 containerd[1461]: 2026-03-02 13:09:26.113 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--nx776-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"602aa150-36ee-49fc-9e50-ac7469877eb8", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90", Pod:"coredns-7d764666f9-nx776", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84c11d74a6e", MAC:"36:1e:50:f0:c7:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.166436 containerd[1461]: 2026-03-02 13:09:26.150 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90" Namespace="kube-system" Pod="coredns-7d764666f9-nx776" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:26.167220 systemd-networkd[1397]: caliac916584804: Gained carrier Mar 2 13:09:26.192839 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.326 [ERROR][4008] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file 
or directory filename="/var/lib/calico/mtu" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.349 [INFO][4008] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0 calico-apiserver-5db5ff8f5c- calico-system d8f412a5-1446-4c06-b684-ffc308baee40 946 0 2026-03-02 13:09:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5db5ff8f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5db5ff8f5c-6clqc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliac916584804 [] [] }} ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.349 [INFO][4008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.430 [INFO][4069] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" HandleID="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.490 [INFO][4069] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" HandleID="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5db5ff8f5c-6clqc", "timestamp":"2026-03-02 13:09:25.430642222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.490 [INFO][4069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.999 [INFO][4069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:25.999 [INFO][4069] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.030 [INFO][4069] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.060 [INFO][4069] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.075 [INFO][4069] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.079 [INFO][4069] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.086 [INFO][4069] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.086 [INFO][4069] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.094 [INFO][4069] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0 Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.107 [INFO][4069] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.150 [INFO][4069] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.151 [INFO][4069] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" host="localhost" Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.151 [INFO][4069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:26.206786 containerd[1461]: 2026-03-02 13:09:26.151 [INFO][4069] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" HandleID="k8s-pod-network.c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.208662 containerd[1461]: 2026-03-02 13:09:26.156 [INFO][4008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"d8f412a5-1446-4c06-b684-ffc308baee40", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5db5ff8f5c-6clqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac916584804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.208662 containerd[1461]: 2026-03-02 13:09:26.156 [INFO][4008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.208662 containerd[1461]: 2026-03-02 13:09:26.156 [INFO][4008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac916584804 ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.208662 containerd[1461]: 2026-03-02 13:09:26.168 [INFO][4008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.208662 containerd[1461]: 2026-03-02 13:09:26.169 [INFO][4008] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"d8f412a5-1446-4c06-b684-ffc308baee40", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0", Pod:"calico-apiserver-5db5ff8f5c-6clqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac916584804", MAC:"82:4c:b1:b8:83:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.208662 containerd[1461]: 2026-03-02 13:09:26.188 [INFO][4008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0" Namespace="calico-system" Pod="calico-apiserver-5db5ff8f5c-6clqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:26.246275 containerd[1461]: time="2026-03-02T13:09:26.242365157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:26.246275 containerd[1461]: time="2026-03-02T13:09:26.242494379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:26.246275 containerd[1461]: time="2026-03-02T13:09:26.242505920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.246275 containerd[1461]: time="2026-03-02T13:09:26.243460211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.271365 containerd[1461]: time="2026-03-02T13:09:26.271305755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-p92rw,Uid:4e957ef7-0903-4b10-8258-cb24b1bc5ba6,Namespace:calico-system,Attempt:1,} returns sandbox id \"d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07\"" Mar 2 13:09:26.289903 containerd[1461]: time="2026-03-02T13:09:26.285246805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:26.289903 containerd[1461]: time="2026-03-02T13:09:26.285447439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:26.289903 containerd[1461]: time="2026-03-02T13:09:26.285463869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.289903 containerd[1461]: time="2026-03-02T13:09:26.285815365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.295973 systemd[1]: Started cri-containerd-deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90.scope - libcontainer container deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90. Mar 2 13:09:26.315191 systemd-networkd[1397]: calib56dfe41e0d: Link UP Mar 2 13:09:26.317532 systemd-networkd[1397]: calib56dfe41e0d: Gained carrier Mar 2 13:09:26.326828 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:26.339936 systemd[1]: Started cri-containerd-c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0.scope - libcontainer container c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0. Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:25.351 [ERROR][4026] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:25.372 [INFO][4026] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--pdthq-eth0 coredns-7d764666f9- kube-system 31943970-ce49-40d1-8211-0cc4a4d2c4a7 944 0 2026-03-02 13:08:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-pdthq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib56dfe41e0d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:25.372 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:25.503 [INFO][4075] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" HandleID="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:25.513 [INFO][4075] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" 
HandleID="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000201b70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-pdthq", "timestamp":"2026-03-02 13:09:25.503805275 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00012f600)} Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:25.513 [INFO][4075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.152 [INFO][4075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.153 [INFO][4075] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.166 [INFO][4075] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.191 [INFO][4075] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.214 [INFO][4075] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.223 [INFO][4075] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.232 [INFO][4075] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.242 [INFO][4075] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.256 [INFO][4075] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.275 [INFO][4075] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.301 [INFO][4075] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.301 [INFO][4075] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" host="localhost" Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.301 [INFO][4075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:26.354759 containerd[1461]: 2026-03-02 13:09:26.301 [INFO][4075] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" HandleID="k8s-pod-network.77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.355362 containerd[1461]: 2026-03-02 13:09:26.306 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pdthq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"31943970-ce49-40d1-8211-0cc4a4d2c4a7", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-pdthq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib56dfe41e0d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.355362 containerd[1461]: 2026-03-02 13:09:26.307 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.355362 containerd[1461]: 2026-03-02 13:09:26.307 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib56dfe41e0d ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.355362 containerd[1461]: 2026-03-02 13:09:26.320 
[INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.355362 containerd[1461]: 2026-03-02 13:09:26.321 [INFO][4026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pdthq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"31943970-ce49-40d1-8211-0cc4a4d2c4a7", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda", Pod:"coredns-7d764666f9-pdthq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib56dfe41e0d", MAC:"9e:68:39:73:22:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.355362 containerd[1461]: 2026-03-02 13:09:26.348 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda" Namespace="kube-system" Pod="coredns-7d764666f9-pdthq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:26.373154 containerd[1461]: time="2026-03-02T13:09:26.373070813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nx776,Uid:602aa150-36ee-49fc-9e50-ac7469877eb8,Namespace:kube-system,Attempt:1,} returns sandbox id \"deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90\"" Mar 2 13:09:26.375681 kubelet[2528]: E0302 13:09:26.375274 2528 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:26.379081 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:26.389029 containerd[1461]: time="2026-03-02T13:09:26.388975152Z" level=info msg="CreateContainer within sandbox \"deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:09:26.397080 containerd[1461]: time="2026-03-02T13:09:26.396506302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:26.398926 containerd[1461]: time="2026-03-02T13:09:26.398002745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:26.398926 containerd[1461]: time="2026-03-02T13:09:26.398024445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.398926 containerd[1461]: time="2026-03-02T13:09:26.398196526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.440374 containerd[1461]: time="2026-03-02T13:09:26.440327652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8554fc6696-qx8nl,Uid:ede4b815-b379-42d3-82de-a6229391cb38,Namespace:calico-system,Attempt:0,}" Mar 2 13:09:26.446837 systemd[1]: Started cri-containerd-77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda.scope - libcontainer container 77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda. Mar 2 13:09:26.454418 containerd[1461]: time="2026-03-02T13:09:26.454366315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db5ff8f5c-6clqc,Uid:d8f412a5-1446-4c06-b684-ffc308baee40,Namespace:calico-system,Attempt:1,} returns sandbox id \"c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0\"" Mar 2 13:09:26.469261 containerd[1461]: time="2026-03-02T13:09:26.469146568Z" level=info msg="CreateContainer within sandbox \"deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2047136d4b4a56e7673d3f3f4b8eb96494ceb24d212494ff35d9ee86e34a818\"" Mar 2 13:09:26.472838 containerd[1461]: time="2026-03-02T13:09:26.471923506Z" level=info msg="StartContainer for \"d2047136d4b4a56e7673d3f3f4b8eb96494ceb24d212494ff35d9ee86e34a818\"" Mar 2 13:09:26.475850 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:26.547845 systemd[1]: Started cri-containerd-d2047136d4b4a56e7673d3f3f4b8eb96494ceb24d212494ff35d9ee86e34a818.scope - libcontainer container d2047136d4b4a56e7673d3f3f4b8eb96494ceb24d212494ff35d9ee86e34a818. 
Mar 2 13:09:26.559212 containerd[1461]: time="2026-03-02T13:09:26.559166849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pdthq,Uid:31943970-ce49-40d1-8211-0cc4a4d2c4a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda\"" Mar 2 13:09:26.560921 kubelet[2528]: E0302 13:09:26.560566 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:26.578508 containerd[1461]: time="2026-03-02T13:09:26.578375907Z" level=info msg="CreateContainer within sandbox \"77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:09:26.608783 containerd[1461]: time="2026-03-02T13:09:26.608746937Z" level=info msg="CreateContainer within sandbox \"77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"768a771cc233c1d1a9a51b79134ad8d07df311d1d3631b007b74156190f4229d\"" Mar 2 13:09:26.609728 containerd[1461]: time="2026-03-02T13:09:26.609536527Z" level=info msg="StartContainer for \"768a771cc233c1d1a9a51b79134ad8d07df311d1d3631b007b74156190f4229d\"" Mar 2 13:09:26.688945 systemd[1]: Started cri-containerd-768a771cc233c1d1a9a51b79134ad8d07df311d1d3631b007b74156190f4229d.scope - libcontainer container 768a771cc233c1d1a9a51b79134ad8d07df311d1d3631b007b74156190f4229d. Mar 2 13:09:26.709733 containerd[1461]: time="2026-03-02T13:09:26.709093143Z" level=info msg="StartContainer for \"d2047136d4b4a56e7673d3f3f4b8eb96494ceb24d212494ff35d9ee86e34a818\" returns successfully" Mar 2 13:09:26.775494 kubelet[2528]: E0302 13:09:26.774062 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:26.808573 containerd[1461]: time="2026-03-02T13:09:26.808538913Z" level=info msg="StartContainer for \"768a771cc233c1d1a9a51b79134ad8d07df311d1d3631b007b74156190f4229d\" returns successfully" Mar 2 13:09:26.818990 kubelet[2528]: I0302 13:09:26.818412 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nx776" podStartSLOduration=31.818396261 podStartE2EDuration="31.818396261s" podCreationTimestamp="2026-03-02 13:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:09:26.804443589 +0000 UTC m=+36.062185118" watchObservedRunningTime="2026-03-02 13:09:26.818396261 +0000 UTC m=+36.076137619" Mar 2 13:09:26.897703 systemd-networkd[1397]: cali39a9117addf: Link UP Mar 2 13:09:26.898021 systemd-networkd[1397]: cali39a9117addf: Gained carrier Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.598 [ERROR][4557] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.642 [INFO][4557] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8554fc6696--qx8nl-eth0 whisker-8554fc6696- calico-system ede4b815-b379-42d3-82de-a6229391cb38 976 0 2026-03-02 13:09:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker 
pod-template-hash:8554fc6696 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8554fc6696-qx8nl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali39a9117addf [] [] }} ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.642 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.736 [INFO][4623] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" HandleID="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Workload="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.765 [INFO][4623] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" HandleID="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Workload="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00061cf20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8554fc6696-qx8nl", "timestamp":"2026-03-02 13:09:26.736184406 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006cac60)} Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.766 [INFO][4623] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.766 [INFO][4623] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.767 [INFO][4623] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.775 [INFO][4623] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.805 [INFO][4623] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.831 [INFO][4623] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.837 [INFO][4623] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.845 [INFO][4623] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.845 [INFO][4623] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.848 [INFO][4623] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.855 [INFO][4623] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.875 [INFO][4623] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.875 [INFO][4623] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" host="localhost" Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.875 [INFO][4623] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:09:26.925806 containerd[1461]: 2026-03-02 13:09:26.875 [INFO][4623] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" HandleID="k8s-pod-network.4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Workload="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.927003 containerd[1461]: 2026-03-02 13:09:26.883 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8554fc6696--qx8nl-eth0", GenerateName:"whisker-8554fc6696-", Namespace:"calico-system", SelfLink:"", UID:"ede4b815-b379-42d3-82de-a6229391cb38", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8554fc6696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8554fc6696-qx8nl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali39a9117addf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.927003 containerd[1461]: 2026-03-02 13:09:26.884 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.927003 containerd[1461]: 2026-03-02 13:09:26.884 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39a9117addf ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.927003 containerd[1461]: 2026-03-02 13:09:26.900 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.927003 containerd[1461]: 2026-03-02 13:09:26.901 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8554fc6696--qx8nl-eth0", GenerateName:"whisker-8554fc6696-", Namespace:"calico-system", SelfLink:"", UID:"ede4b815-b379-42d3-82de-a6229391cb38", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8554fc6696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a", Pod:"whisker-8554fc6696-qx8nl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali39a9117addf", MAC:"7e:8a:d8:08:6f:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:26.927003 containerd[1461]: 2026-03-02 13:09:26.918 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a" Namespace="calico-system" Pod="whisker-8554fc6696-qx8nl" WorkloadEndpoint="localhost-k8s-whisker--8554fc6696--qx8nl-eth0" Mar 2 13:09:26.941259 kubelet[2528]: I0302 13:09:26.941194 2528 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="95285a45-8e5b-4b4e-8b93-d4b8919773db" path="/var/lib/kubelet/pods/95285a45-8e5b-4b4e-8b93-d4b8919773db/volumes" Mar 2 13:09:26.961510 containerd[1461]: time="2026-03-02T13:09:26.958008389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:09:26.961510 containerd[1461]: time="2026-03-02T13:09:26.958068692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:09:26.961510 containerd[1461]: time="2026-03-02T13:09:26.958082007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.961510 containerd[1461]: time="2026-03-02T13:09:26.958175000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:09:26.992828 systemd[1]: Started cri-containerd-4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a.scope - libcontainer container 4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a. 
Mar 2 13:09:27.010895 systemd-networkd[1397]: cali91908ea8c93: Gained IPv6LL Mar 2 13:09:27.042326 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:09:27.090526 containerd[1461]: time="2026-03-02T13:09:27.090441098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8554fc6696-qx8nl,Uid:ede4b815-b379-42d3-82de-a6229391cb38,Namespace:calico-system,Attempt:0,} returns sandbox id \"4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a\"" Mar 2 13:09:27.139413 systemd-networkd[1397]: calic1db14a45ac: Gained IPv6LL Mar 2 13:09:27.201795 systemd-networkd[1397]: caliac4d596e711: Gained IPv6LL Mar 2 13:09:27.329839 systemd-networkd[1397]: cali4d4a920f4e0: Gained IPv6LL Mar 2 13:09:27.458013 systemd-networkd[1397]: cali84c11d74a6e: Gained IPv6LL Mar 2 13:09:27.714461 systemd-networkd[1397]: calib56dfe41e0d: Gained IPv6LL Mar 2 13:09:27.716186 systemd-networkd[1397]: caliac916584804: Gained IPv6LL Mar 2 13:09:27.722295 containerd[1461]: time="2026-03-02T13:09:27.722164659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:27.723509 containerd[1461]: time="2026-03-02T13:09:27.723423849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 13:09:27.724730 containerd[1461]: time="2026-03-02T13:09:27.724685084Z" level=info msg="ImageCreate event name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:27.727711 containerd[1461]: time="2026-03-02T13:09:27.727663743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:27.728400 containerd[1461]: time="2026-03-02T13:09:27.728263568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 1.855851617s" Mar 2 13:09:27.728400 containerd[1461]: time="2026-03-02T13:09:27.728324792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 13:09:27.731753 containerd[1461]: time="2026-03-02T13:09:27.730662816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:09:27.748098 containerd[1461]: time="2026-03-02T13:09:27.748050009Z" level=info msg="CreateContainer within sandbox \"8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 13:09:27.763439 containerd[1461]: time="2026-03-02T13:09:27.763333290Z" level=info msg="CreateContainer within sandbox \"8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cf7261932ca78fdea56c93f22671579b06a7e16ab6d63e81ff245bbc80460d56\"" Mar 2 13:09:27.764587 containerd[1461]: 
time="2026-03-02T13:09:27.764436501Z" level=info msg="StartContainer for \"cf7261932ca78fdea56c93f22671579b06a7e16ab6d63e81ff245bbc80460d56\"" Mar 2 13:09:27.806400 kubelet[2528]: E0302 13:09:27.806357 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:27.807649 systemd[1]: Started cri-containerd-cf7261932ca78fdea56c93f22671579b06a7e16ab6d63e81ff245bbc80460d56.scope - libcontainer container cf7261932ca78fdea56c93f22671579b06a7e16ab6d63e81ff245bbc80460d56. Mar 2 13:09:27.813500 kubelet[2528]: E0302 13:09:27.813382 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:27.827184 kubelet[2528]: I0302 13:09:27.827046 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pdthq" podStartSLOduration=32.827033923 podStartE2EDuration="32.827033923s" podCreationTimestamp="2026-03-02 13:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:09:27.823520075 +0000 UTC m=+37.081261433" watchObservedRunningTime="2026-03-02 13:09:27.827033923 +0000 UTC m=+37.084775281" Mar 2 13:09:27.905081 containerd[1461]: time="2026-03-02T13:09:27.905025622Z" level=info msg="StartContainer for \"cf7261932ca78fdea56c93f22671579b06a7e16ab6d63e81ff245bbc80460d56\" returns successfully" Mar 2 13:09:28.024285 kubelet[2528]: I0302 13:09:28.019027 2528 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:09:28.024285 kubelet[2528]: E0302 13:09:28.019444 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:28.290279 systemd-networkd[1397]: cali39a9117addf: Gained IPv6LL Mar 2 13:09:28.361755 kernel: calico-node[4798]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 13:09:28.819894 kubelet[2528]: E0302 13:09:28.819832 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:28.822417 kubelet[2528]: E0302 13:09:28.820402 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:28.822417 kubelet[2528]: E0302 13:09:28.820689 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:28.837444 kubelet[2528]: I0302 13:09:28.836958 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dfd68d47b-n9g4h" podStartSLOduration=17.977027398 podStartE2EDuration="19.83694312s" podCreationTimestamp="2026-03-02 13:09:09 +0000 UTC" firstStartedPulling="2026-03-02 13:09:25.870548455 +0000 UTC m=+35.128289814" lastFinishedPulling="2026-03-02 13:09:27.730464178 +0000 UTC m=+36.988205536" observedRunningTime="2026-03-02 13:09:28.833767546 +0000 UTC m=+38.091508904" watchObservedRunningTime="2026-03-02 13:09:28.83694312 +0000 UTC m=+38.094684478" Mar 2 13:09:29.049180 systemd-networkd[1397]: 
vxlan.calico: Link UP Mar 2 13:09:29.049191 systemd-networkd[1397]: vxlan.calico: Gained carrier Mar 2 13:09:29.574089 containerd[1461]: time="2026-03-02T13:09:29.573927357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:29.574931 containerd[1461]: time="2026-03-02T13:09:29.574830818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 13:09:29.576407 containerd[1461]: time="2026-03-02T13:09:29.576355704Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:29.579135 containerd[1461]: time="2026-03-02T13:09:29.579042550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:29.579996 containerd[1461]: time="2026-03-02T13:09:29.579827854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 1.849137748s" Mar 2 13:09:29.579996 containerd[1461]: time="2026-03-02T13:09:29.579898006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 13:09:29.581176 containerd[1461]: time="2026-03-02T13:09:29.581122251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 13:09:29.586297 containerd[1461]: time="2026-03-02T13:09:29.586250734Z" level=info msg="CreateContainer within sandbox \"3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:09:29.604294 containerd[1461]: time="2026-03-02T13:09:29.604220055Z" level=info msg="CreateContainer within sandbox \"3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3e19a187e2318687ab3ce767f9b8041c90a3e2091852f77e6fc18656235c53d\"" Mar 2 13:09:29.605754 containerd[1461]: time="2026-03-02T13:09:29.605665173Z" level=info msg="StartContainer for \"b3e19a187e2318687ab3ce767f9b8041c90a3e2091852f77e6fc18656235c53d\"" Mar 2 13:09:29.641985 systemd[1]: Started cri-containerd-b3e19a187e2318687ab3ce767f9b8041c90a3e2091852f77e6fc18656235c53d.scope - libcontainer container b3e19a187e2318687ab3ce767f9b8041c90a3e2091852f77e6fc18656235c53d. 
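The two pulls recorded above report both the network bytes read and the wall-clock time: 52396348 bytes in 1.855851617s for kube-controllers and 48403149 bytes in 1.849137748s for apiserver. A minimal Go sketch (illustrative only, not part of the log or of containerd) that turns those logged figures into an effective download rate:

```go
// Illustrative only: effective download rate from the figures logged above
// ("bytes read=..." and "in <duration>"); not part of containerd or kubelet.
package main

import (
	"fmt"
	"time"
)

type pull struct {
	image    string
	bytes    int64         // "bytes read" reported when the pull stops
	duration time.Duration // the "in ..." figure on the PullImage line
}

// rateMiBps is bytes transferred per second, expressed in MiB/s.
func rateMiBps(p pull) float64 {
	return float64(p.bytes) / p.duration.Seconds() / (1 << 20)
}

func main() {
	pulls := []pull{ // values copied from the log lines above
		{"calico/kube-controllers:v3.31.3", 52396348, 1855851617 * time.Nanosecond},
		{"calico/apiserver:v3.31.3", 48403149, 1849137748 * time.Nanosecond},
	}
	for _, p := range pulls {
		fmt.Printf("%-35s %.1f MiB/s\n", p.image, rateMiBps(p))
	}
}
```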
Mar 2 13:09:29.689500 containerd[1461]: time="2026-03-02T13:09:29.689453984Z" level=info msg="StartContainer for \"b3e19a187e2318687ab3ce767f9b8041c90a3e2091852f77e6fc18656235c53d\" returns successfully" Mar 2 13:09:29.829420 kubelet[2528]: E0302 13:09:29.829138 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:29.835635 kubelet[2528]: E0302 13:09:29.834848 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:29.865076 kubelet[2528]: I0302 13:09:29.864771 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5db5ff8f5c-zh52r" podStartSLOduration=18.321730072 podStartE2EDuration="21.864691127s" podCreationTimestamp="2026-03-02 13:09:08 +0000 UTC" firstStartedPulling="2026-03-02 13:09:26.038018229 +0000 UTC m=+35.295759597" lastFinishedPulling="2026-03-02 13:09:29.580979294 +0000 UTC m=+38.838720652" observedRunningTime="2026-03-02 13:09:29.851791795 +0000 UTC m=+39.109533153" watchObservedRunningTime="2026-03-02 13:09:29.864691127 +0000 UTC m=+39.122432485" Mar 2 13:09:30.227113 containerd[1461]: time="2026-03-02T13:09:30.226446475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:30.227937 containerd[1461]: time="2026-03-02T13:09:30.227888752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 13:09:30.244053 containerd[1461]: time="2026-03-02T13:09:30.243791061Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:30.247320 containerd[1461]: time="2026-03-02T13:09:30.247292762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:30.248050 containerd[1461]: time="2026-03-02T13:09:30.247981542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 666.813767ms" Mar 2 13:09:30.248050 containerd[1461]: time="2026-03-02T13:09:30.248033580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 13:09:30.249437 containerd[1461]: time="2026-03-02T13:09:30.249418510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 13:09:30.255446 containerd[1461]: time="2026-03-02T13:09:30.255424131Z" level=info msg="CreateContainer within sandbox \"f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 13:09:30.274380 containerd[1461]: time="2026-03-02T13:09:30.274279790Z" level=info msg="CreateContainer within sandbox \"f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1820e8aa4e305420f129c1f5b9d94396319305352293d6155e05e0d92057249a\"" Mar 2 13:09:30.276736 containerd[1461]: time="2026-03-02T13:09:30.275113474Z" level=info msg="StartContainer for \"1820e8aa4e305420f129c1f5b9d94396319305352293d6155e05e0d92057249a\"" Mar 2 13:09:30.312831 systemd[1]: Started cri-containerd-1820e8aa4e305420f129c1f5b9d94396319305352293d6155e05e0d92057249a.scope - libcontainer container 1820e8aa4e305420f129c1f5b9d94396319305352293d6155e05e0d92057249a. Mar 2 13:09:30.353350 containerd[1461]: time="2026-03-02T13:09:30.353281687Z" level=info msg="StartContainer for \"1820e8aa4e305420f129c1f5b9d94396319305352293d6155e05e0d92057249a\" returns successfully" Mar 2 13:09:30.786220 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Mar 2 13:09:32.404935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819008482.mount: Deactivated successfully. Mar 2 13:09:32.779823 containerd[1461]: time="2026-03-02T13:09:32.779717665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:32.780806 containerd[1461]: time="2026-03-02T13:09:32.780755724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 13:09:32.782308 containerd[1461]: time="2026-03-02T13:09:32.782258399Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:32.785070 containerd[1461]: time="2026-03-02T13:09:32.785024648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:32.785914 containerd[1461]: time="2026-03-02T13:09:32.785841423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 2.536327695s" Mar 2 13:09:32.785961 containerd[1461]: time="2026-03-02T13:09:32.785912146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 13:09:32.788009 containerd[1461]: time="2026-03-02T13:09:32.787284937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:09:32.793285 containerd[1461]: time="2026-03-02T13:09:32.793225797Z" level=info msg="CreateContainer within sandbox \"d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 13:09:32.813140 containerd[1461]: time="2026-03-02T13:09:32.813023433Z" level=info msg="CreateContainer within sandbox \"d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6c4408cf8c19af7feb1fcc6b32681ec4f74b331e978a5e41e2628ccc56426554\"" Mar 2 13:09:32.815480 containerd[1461]: time="2026-03-02T13:09:32.815370105Z" level=info msg="StartContainer for \"6c4408cf8c19af7feb1fcc6b32681ec4f74b331e978a5e41e2628ccc56426554\"" Mar 2 13:09:32.880795 systemd[1]: 
Started cri-containerd-6c4408cf8c19af7feb1fcc6b32681ec4f74b331e978a5e41e2628ccc56426554.scope - libcontainer container 6c4408cf8c19af7feb1fcc6b32681ec4f74b331e978a5e41e2628ccc56426554. Mar 2 13:09:32.930557 containerd[1461]: time="2026-03-02T13:09:32.930446426Z" level=info msg="StartContainer for \"6c4408cf8c19af7feb1fcc6b32681ec4f74b331e978a5e41e2628ccc56426554\" returns successfully" Mar 2 13:09:32.971924 containerd[1461]: time="2026-03-02T13:09:32.971828785Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:32.974015 containerd[1461]: time="2026-03-02T13:09:32.973945846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 13:09:32.979122 containerd[1461]: time="2026-03-02T13:09:32.979038844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 191.724782ms" Mar 2 13:09:32.979122 containerd[1461]: time="2026-03-02T13:09:32.979084570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 13:09:32.980169 containerd[1461]: time="2026-03-02T13:09:32.980125587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 13:09:32.984395 containerd[1461]: time="2026-03-02T13:09:32.984322186Z" level=info msg="CreateContainer within sandbox \"c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:09:33.014027 containerd[1461]: time="2026-03-02T13:09:33.013932514Z" level=info msg="CreateContainer within sandbox \"c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0004243de6fa300efef11f734230b496d246a0f94f185a647d94821c142ad066\"" Mar 2 13:09:33.015196 containerd[1461]: time="2026-03-02T13:09:33.015069572Z" level=info msg="StartContainer for \"0004243de6fa300efef11f734230b496d246a0f94f185a647d94821c142ad066\"" Mar 2 13:09:33.057937 systemd[1]: Started cri-containerd-0004243de6fa300efef11f734230b496d246a0f94f185a647d94821c142ad066.scope - libcontainer container 0004243de6fa300efef11f734230b496d246a0f94f185a647d94821c142ad066. 
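Note the contrast with the second apiserver pull just above: it emits an ImageUpdate event rather than ImageCreate, reads only 77 bytes, and completes in 191.724782ms, because the image content is already in the local store. A hedged sketch of how one might classify pulls from these logged figures (a heuristic over the log values, not behaviour guaranteed by any containerd API):

```go
// Illustrative sketch, not containerd code: classify a completed pull as a
// network pull or a local re-resolve using the figures the log reports.
package main

import (
	"fmt"
	"time"
)

type pullResult struct {
	event     string        // "ImageCreate" or "ImageUpdate", as logged above
	bytesRead int64         // "bytes read=..." from the log
	elapsed   time.Duration // the "in <duration>" figure on the PullImage line
}

// alreadyPresent is a heuristic grounded in the log: re-pulling content that
// is already in the store reads only the manifest (tens of bytes) and emits
// an ImageUpdate event instead of ImageCreate.
func alreadyPresent(r pullResult) bool {
	return r.event == "ImageUpdate" && r.bytesRead < 1024
}

func main() {
	first := pullResult{"ImageCreate", 48403149, 1849137748 * time.Nanosecond}
	second := pullResult{"ImageUpdate", 77, 191724782 * time.Nanosecond}
	fmt.Println(alreadyPresent(first), alreadyPresent(second)) // false true
}
```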
Mar 2 13:09:33.105116 containerd[1461]: time="2026-03-02T13:09:33.104975849Z" level=info msg="StartContainer for \"0004243de6fa300efef11f734230b496d246a0f94f185a647d94821c142ad066\" returns successfully" Mar 2 13:09:33.729314 containerd[1461]: time="2026-03-02T13:09:33.728895793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:33.730209 containerd[1461]: time="2026-03-02T13:09:33.729924990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825" Mar 2 13:09:33.733291 containerd[1461]: time="2026-03-02T13:09:33.733251790Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:33.736757 containerd[1461]: time="2026-03-02T13:09:33.736701279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:33.738155 containerd[1461]: time="2026-03-02T13:09:33.738089360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 757.915283ms" Mar 2 13:09:33.738155 containerd[1461]: time="2026-03-02T13:09:33.738138502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\"" Mar 2 13:09:33.740455 containerd[1461]: time="2026-03-02T13:09:33.739693906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 13:09:33.745121 containerd[1461]: time="2026-03-02T13:09:33.745035557Z" level=info msg="CreateContainer within sandbox \"4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 13:09:33.764836 containerd[1461]: time="2026-03-02T13:09:33.764713304Z" level=info msg="CreateContainer within sandbox \"4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d5b8694ddb0ef4ca1d545c6f2d66da7baadeaeaf0bc33d4e5451f6fa302e40ad\"" Mar 2 13:09:33.766789 containerd[1461]: time="2026-03-02T13:09:33.766743373Z" level=info msg="StartContainer for \"d5b8694ddb0ef4ca1d545c6f2d66da7baadeaeaf0bc33d4e5451f6fa302e40ad\"" Mar 2 13:09:33.823844 systemd[1]: Started cri-containerd-d5b8694ddb0ef4ca1d545c6f2d66da7baadeaeaf0bc33d4e5451f6fa302e40ad.scope - libcontainer container d5b8694ddb0ef4ca1d545c6f2d66da7baadeaeaf0bc33d4e5451f6fa302e40ad. 
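Each StartContainer above is paired with systemd launching a transient scope named cri-containerd-&lt;container-id&gt;.scope. A small illustrative Go helper (not taken from any of the components logging here) that recovers the container ID from such a unit name:

```go
// Illustrative sketch: the systemd unit names in the log follow the pattern
// "cri-containerd-<container-id>.scope"; this helper extracts the ID back out.
package main

import (
	"fmt"
	"strings"
)

func containerIDFromScope(unit string) (string, bool) {
	const prefix, suffix = "cri-containerd-", ".scope"
	if !strings.HasPrefix(unit, prefix) || !strings.HasSuffix(unit, suffix) {
		return "", false
	}
	return strings.TrimSuffix(strings.TrimPrefix(unit, prefix), suffix), true
}

func main() {
	// Unit name copied from the whisker container start logged above.
	id, ok := containerIDFromScope("cri-containerd-d5b8694ddb0ef4ca1d545c6f2d66da7baadeaeaf0bc33d4e5451f6fa302e40ad.scope")
	fmt.Println(ok, id[:12]) // true d5b8694ddb0e
}
```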
Mar 2 13:09:33.923737 kubelet[2528]: I0302 13:09:33.923075 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7d7658d587-p92rw" podStartSLOduration=19.410196996 podStartE2EDuration="25.923056743s" podCreationTimestamp="2026-03-02 13:09:08 +0000 UTC" firstStartedPulling="2026-03-02 13:09:26.274293521 +0000 UTC m=+35.532034878" lastFinishedPulling="2026-03-02 13:09:32.787153267 +0000 UTC m=+42.044894625" observedRunningTime="2026-03-02 13:09:33.906883967 +0000 UTC m=+43.164625355" watchObservedRunningTime="2026-03-02 13:09:33.923056743 +0000 UTC m=+43.180798101" Mar 2 13:09:33.952037 containerd[1461]: time="2026-03-02T13:09:33.951854630Z" level=info msg="StartContainer for \"d5b8694ddb0ef4ca1d545c6f2d66da7baadeaeaf0bc33d4e5451f6fa302e40ad\" returns successfully" Mar 2 13:09:34.628810 containerd[1461]: time="2026-03-02T13:09:34.628695900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:34.634966 containerd[1461]: time="2026-03-02T13:09:34.634897860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 13:09:34.636191 containerd[1461]: time="2026-03-02T13:09:34.636101324Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:34.638756 containerd[1461]: time="2026-03-02T13:09:34.638670801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:34.639842 containerd[1461]: time="2026-03-02T13:09:34.639793103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 900.068461ms" Mar 2 13:09:34.639957 containerd[1461]: time="2026-03-02T13:09:34.639840832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 13:09:34.641468 containerd[1461]: time="2026-03-02T13:09:34.641423212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 13:09:34.646272 containerd[1461]: time="2026-03-02T13:09:34.646220397Z" level=info msg="CreateContainer within sandbox \"f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 13:09:34.667997 containerd[1461]: time="2026-03-02T13:09:34.667826040Z" level=info msg="CreateContainer within sandbox \"f68248015f4ef8de888a28ff910992b9a4e5457f963389f68b51a067aff974de\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"17a4834c72d101cd1c1f13aaaaaaaaa1d215286d4238b6812167c32e803c998e\"" Mar 2 13:09:34.668789 containerd[1461]: time="2026-03-02T13:09:34.668734898Z" level=info msg="StartContainer for \"17a4834c72d101cd1c1f13aaaaaaaaa1d215286d4238b6812167c32e803c998e\"" Mar 2 13:09:34.714968 
systemd[1]: Started cri-containerd-17a4834c72d101cd1c1f13aaaaaaaaa1d215286d4238b6812167c32e803c998e.scope - libcontainer container 17a4834c72d101cd1c1f13aaaaaaaaa1d215286d4238b6812167c32e803c998e. Mar 2 13:09:34.756354 containerd[1461]: time="2026-03-02T13:09:34.756312137Z" level=info msg="StartContainer for \"17a4834c72d101cd1c1f13aaaaaaaaa1d215286d4238b6812167c32e803c998e\" returns successfully" Mar 2 13:09:34.899697 kubelet[2528]: I0302 13:09:34.899530 2528 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:09:34.923691 kubelet[2528]: I0302 13:09:34.923199 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-dpl5p" podStartSLOduration=17.34083361 podStartE2EDuration="25.923183083s" podCreationTimestamp="2026-03-02 13:09:09 +0000 UTC" firstStartedPulling="2026-03-02 13:09:26.058912177 +0000 UTC m=+35.316653555" lastFinishedPulling="2026-03-02 13:09:34.64126167 +0000 UTC m=+43.899003028" observedRunningTime="2026-03-02 13:09:34.920579098 +0000 UTC m=+44.178320456" watchObservedRunningTime="2026-03-02 13:09:34.923183083 +0000 UTC m=+44.180924442" Mar 2 13:09:34.925091 kubelet[2528]: I0302 13:09:34.924517 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5db5ff8f5c-6clqc" podStartSLOduration=20.401432146 podStartE2EDuration="26.924506004s" podCreationTimestamp="2026-03-02 13:09:08 +0000 UTC" firstStartedPulling="2026-03-02 13:09:26.456944476 +0000 UTC m=+35.714685834" lastFinishedPulling="2026-03-02 13:09:32.980018334 +0000 UTC m=+42.237759692" observedRunningTime="2026-03-02 13:09:33.925916216 +0000 UTC m=+43.183657584" watchObservedRunningTime="2026-03-02 13:09:34.924506004 +0000 UTC m=+44.182247362" Mar 2 13:09:35.429699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933483227.mount: Deactivated successfully. 
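The pod_startup_latency_tracker lines above carry enough data to reproduce their own arithmetic: for csi-node-driver-dpl5p, podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below re-derives both numbers from the logged timestamps; it is a reconstruction checked against the values in the log, not kubelet source:

```go
// Reconstruction of the pod startup latency arithmetic implied by the log
// values above; checked against the logged numbers, not taken from kubelet code.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.Time format, as logged
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-03-02 13:09:09 +0000 UTC")             // podCreationTimestamp
	running := parse("2026-03-02 13:09:34.923183083 +0000 UTC")   // watchObservedRunningTime (wall-clock part)
	pullStart := parse("2026-03-02 13:09:26.058912177 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2026-03-02 13:09:34.64126167 +0000 UTC")    // lastFinishedPulling

	e2e := running.Sub(created)         // ≈ 25.923183083s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ≈ 17.3408336s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
```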
Mar 2 13:09:35.504357 kubelet[2528]: I0302 13:09:35.504276 2528 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 13:09:35.504357 kubelet[2528]: I0302 13:09:35.504335 2528 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 13:09:35.528027 containerd[1461]: time="2026-03-02T13:09:35.527548383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:35.530983 containerd[1461]: time="2026-03-02T13:09:35.530936330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119" Mar 2 13:09:35.532520 containerd[1461]: time="2026-03-02T13:09:35.532422201Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:35.535037 containerd[1461]: time="2026-03-02T13:09:35.534939992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:09:35.535838 containerd[1461]: time="2026-03-02T13:09:35.535772950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 894.293743ms" Mar 2 13:09:35.535838 containerd[1461]: time="2026-03-02T13:09:35.535820217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\"" Mar 2 13:09:35.543178 containerd[1461]: time="2026-03-02T13:09:35.543081110Z" level=info msg="CreateContainer within sandbox \"4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 13:09:35.556650 containerd[1461]: time="2026-03-02T13:09:35.556537010Z" level=info msg="CreateContainer within sandbox \"4db186c6cd6b1ed5b2ce2a7ba792f2d6c2742d83c3b409b9951e93f079e5807a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5c46bfab21e24e265512bd3d9aa4a85d252bf68ba26cfe664cb8cd1ca5ff8a7f\"" Mar 2 13:09:35.557641 containerd[1461]: time="2026-03-02T13:09:35.557273537Z" level=info msg="StartContainer for \"5c46bfab21e24e265512bd3d9aa4a85d252bf68ba26cfe664cb8cd1ca5ff8a7f\"" Mar 2 13:09:35.595777 systemd[1]: Started cri-containerd-5c46bfab21e24e265512bd3d9aa4a85d252bf68ba26cfe664cb8cd1ca5ff8a7f.scope - libcontainer container 5c46bfab21e24e265512bd3d9aa4a85d252bf68ba26cfe664cb8cd1ca5ff8a7f. Mar 2 13:09:35.668673 containerd[1461]: time="2026-03-02T13:09:35.667594823Z" level=info msg="StartContainer for \"5c46bfab21e24e265512bd3d9aa4a85d252bf68ba26cfe664cb8cd1ca5ff8a7f\" returns successfully" Mar 2 13:09:38.376775 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:59502.service - OpenSSH per-connection server daemon (10.0.0.1:59502). 
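The per-connection unit started above, sshd@7-10.0.0.116:22-10.0.0.1:59502.service, encodes a connection counter, the listening endpoint, and the client endpoint in its instance name. A rough Go sketch that splits such a name apart (it assumes the IPv4 host:port form seen in this log):

```go
// Illustrative sketch: split the socket-activated sshd instance name seen in
// the log ("sshd@7-10.0.0.116:22-10.0.0.1:59502.service") into its parts.
// Assumes IPv4 host:port endpoints as in this log; not an sshd or systemd API.
package main

import (
	"fmt"
	"strings"
)

type sshConn struct {
	instance string // per-connection counter
	local    string // listening address:port
	remote   string // client address:port
}

func parseSSHDUnit(unit string) (sshConn, bool) {
	if !strings.HasPrefix(unit, "sshd@") || !strings.HasSuffix(unit, ".service") {
		return sshConn{}, false
	}
	name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(name, "-", 3)
	if len(parts) != 3 {
		return sshConn{}, false
	}
	return sshConn{instance: parts[0], local: parts[1], remote: parts[2]}, true
}

func main() {
	c, ok := parseSSHDUnit("sshd@7-10.0.0.116:22-10.0.0.1:59502.service")
	fmt.Println(ok, c.instance, c.local, c.remote) // true 7 10.0.0.116:22 10.0.0.1:59502
}
```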
Mar 2 13:09:38.477389 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 59502 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:09:38.479854 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:38.484977 systemd-logind[1450]: New session 8 of user core. Mar 2 13:09:38.493762 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 13:09:38.891652 sshd[5385]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:38.896014 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:59502.service: Deactivated successfully. Mar 2 13:09:38.898496 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 13:09:38.899417 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Mar 2 13:09:38.901241 systemd-logind[1450]: Removed session 8. Mar 2 13:09:43.909003 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:36296.service - OpenSSH per-connection server daemon (10.0.0.1:36296). Mar 2 13:09:43.954081 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 36296 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:09:43.955839 sshd[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:43.960645 systemd-logind[1450]: New session 9 of user core. Mar 2 13:09:43.968804 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 13:09:44.114513 sshd[5427]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:44.119576 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:36296.service: Deactivated successfully. Mar 2 13:09:44.121551 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 13:09:44.122349 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:09:44.123804 systemd-logind[1450]: Removed session 9. Mar 2 13:09:49.126471 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:36310.service - OpenSSH per-connection server daemon (10.0.0.1:36310). Mar 2 13:09:49.182578 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 36310 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:09:49.185351 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:49.191070 systemd-logind[1450]: New session 10 of user core. Mar 2 13:09:49.197862 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:09:49.327484 sshd[5442]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:49.332527 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:36310.service: Deactivated successfully. Mar 2 13:09:49.335587 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:09:49.336976 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:09:49.338436 systemd-logind[1450]: Removed session 10. Mar 2 13:09:50.886074 containerd[1461]: time="2026-03-02T13:09:50.885815102Z" level=info msg="StopPodSandbox for \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\"" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:50.963 [WARNING][5482] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0", GenerateName:"calico-kube-controllers-7dfd68d47b-", Namespace:"calico-system", SelfLink:"", UID:"344f0efe-0720-4de3-b1fe-8bb01ab80e58", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd68d47b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b", Pod:"calico-kube-controllers-7dfd68d47b-n9g4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d4a920f4e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:50.964 [INFO][5482] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:50.964 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" iface="eth0" netns="" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:50.964 [INFO][5482] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:50.964 [INFO][5482] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.036 [INFO][5492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.036 [INFO][5492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.036 [INFO][5492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.044 [WARNING][5492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.044 [INFO][5492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.046 [INFO][5492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.053008 containerd[1461]: 2026-03-02 13:09:51.049 [INFO][5482] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.060054 containerd[1461]: time="2026-03-02T13:09:51.059964918Z" level=info msg="TearDown network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\" successfully" Mar 2 13:09:51.060054 containerd[1461]: time="2026-03-02T13:09:51.060030681Z" level=info msg="StopPodSandbox for \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\" returns successfully" Mar 2 13:09:51.082461 containerd[1461]: time="2026-03-02T13:09:51.082370065Z" level=info msg="RemovePodSandbox for \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\"" Mar 2 13:09:51.084636 containerd[1461]: time="2026-03-02T13:09:51.084557506Z" level=info msg="Forcibly stopping sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\"" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.126 [WARNING][5511] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0", GenerateName:"calico-kube-controllers-7dfd68d47b-", Namespace:"calico-system", SelfLink:"", UID:"344f0efe-0720-4de3-b1fe-8bb01ab80e58", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfd68d47b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b0fe52ad47a5f74fef13530c09244070b49398efee29da05f57a1c20eccdb3b", Pod:"calico-kube-controllers-7dfd68d47b-n9g4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d4a920f4e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.127 [INFO][5511] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.127 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" iface="eth0" netns="" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.127 [INFO][5511] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.127 [INFO][5511] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.156 [INFO][5520] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.156 [INFO][5520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.156 [INFO][5520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.163 [WARNING][5520] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.163 [INFO][5520] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" HandleID="k8s-pod-network.a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Workload="localhost-k8s-calico--kube--controllers--7dfd68d47b--n9g4h-eth0" Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.165 [INFO][5520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.171375 containerd[1461]: 2026-03-02 13:09:51.168 [INFO][5511] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965" Mar 2 13:09:51.171375 containerd[1461]: time="2026-03-02T13:09:51.171352929Z" level=info msg="TearDown network for sandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\" successfully" Mar 2 13:09:51.194465 containerd[1461]: time="2026-03-02T13:09:51.194428483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:09:51.223184 containerd[1461]: time="2026-03-02T13:09:51.223079316Z" level=info msg="RemovePodSandbox \"a60c456080831ecb2c9ec8de4972b052544b12c3aba2b55c21a49f319abd2965\" returns successfully" Mar 2 13:09:51.231011 containerd[1461]: time="2026-03-02T13:09:51.230941526Z" level=info msg="StopPodSandbox for \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\"" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.271 [WARNING][5538] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pdthq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"31943970-ce49-40d1-8211-0cc4a4d2c4a7", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda", Pod:"coredns-7d764666f9-pdthq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib56dfe41e0d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.271 [INFO][5538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.271 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" iface="eth0" netns="" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.271 [INFO][5538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.271 [INFO][5538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.298 [INFO][5546] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.299 [INFO][5546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.299 [INFO][5546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.304 [WARNING][5546] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.304 [INFO][5546] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.306 [INFO][5546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.311950 containerd[1461]: 2026-03-02 13:09:51.309 [INFO][5538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.312363 containerd[1461]: time="2026-03-02T13:09:51.311973336Z" level=info msg="TearDown network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\" successfully" Mar 2 13:09:51.312363 containerd[1461]: time="2026-03-02T13:09:51.311999135Z" level=info msg="StopPodSandbox for \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\" returns successfully" Mar 2 13:09:51.313236 containerd[1461]: time="2026-03-02T13:09:51.312853446Z" level=info msg="RemovePodSandbox for \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\"" Mar 2 13:09:51.313236 containerd[1461]: time="2026-03-02T13:09:51.312933836Z" level=info msg="Forcibly stopping sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\"" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.352 [WARNING][5563] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pdthq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"31943970-ce49-40d1-8211-0cc4a4d2c4a7", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77e039ad4eb367e3566d51e410acedbf400b3f67d0d25a1bd669e4fb6d0a0cda", Pod:"coredns-7d764666f9-pdthq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib56dfe41e0d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.353 [INFO][5563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.353 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" iface="eth0" netns="" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.353 [INFO][5563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.353 [INFO][5563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.382 [INFO][5572] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.382 [INFO][5572] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.382 [INFO][5572] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.389 [WARNING][5572] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.389 [INFO][5572] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" HandleID="k8s-pod-network.8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Workload="localhost-k8s-coredns--7d764666f9--pdthq-eth0" Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.391 [INFO][5572] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.398834 containerd[1461]: 2026-03-02 13:09:51.395 [INFO][5563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8" Mar 2 13:09:51.398834 containerd[1461]: time="2026-03-02T13:09:51.398154341Z" level=info msg="TearDown network for sandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\" successfully" Mar 2 13:09:51.403794 containerd[1461]: time="2026-03-02T13:09:51.403743557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:09:51.404157 containerd[1461]: time="2026-03-02T13:09:51.404062261Z" level=info msg="RemovePodSandbox \"8a3e73bb3af2e5709d3b7709b02cb3ff5d95e3ddc9d27a7ba55b60fa1b0354f8\" returns successfully" Mar 2 13:09:51.404938 containerd[1461]: time="2026-03-02T13:09:51.404840128Z" level=info msg="StopPodSandbox for \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\"" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.458 [WARNING][5589] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" WorkloadEndpoint="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.458 [INFO][5589] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.458 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" iface="eth0" netns="" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.458 [INFO][5589] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.458 [INFO][5589] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.490 [INFO][5598] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.490 [INFO][5598] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.490 [INFO][5598] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.496 [WARNING][5598] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.496 [INFO][5598] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.499 [INFO][5598] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.505356 containerd[1461]: 2026-03-02 13:09:51.502 [INFO][5589] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.505803 containerd[1461]: time="2026-03-02T13:09:51.505400005Z" level=info msg="TearDown network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\" successfully" Mar 2 13:09:51.505803 containerd[1461]: time="2026-03-02T13:09:51.505426785Z" level=info msg="StopPodSandbox for \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\" returns successfully" Mar 2 13:09:51.506197 containerd[1461]: time="2026-03-02T13:09:51.506156319Z" level=info msg="RemovePodSandbox for \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\"" Mar 2 13:09:51.506320 containerd[1461]: time="2026-03-02T13:09:51.506207294Z" level=info msg="Forcibly stopping sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\"" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.550 [WARNING][5614] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" WorkloadEndpoint="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.550 [INFO][5614] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.550 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" iface="eth0" netns="" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.550 [INFO][5614] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.550 [INFO][5614] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.575 [INFO][5622] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.576 [INFO][5622] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.576 [INFO][5622] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.584 [WARNING][5622] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.584 [INFO][5622] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" HandleID="k8s-pod-network.fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Workload="localhost-k8s-whisker--67b5486978--26wb5-eth0" Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.587 [INFO][5622] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.595289 containerd[1461]: 2026-03-02 13:09:51.590 [INFO][5614] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d" Mar 2 13:09:51.595693 containerd[1461]: time="2026-03-02T13:09:51.595367807Z" level=info msg="TearDown network for sandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\" successfully" Mar 2 13:09:51.602228 containerd[1461]: time="2026-03-02T13:09:51.601494531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:09:51.602228 containerd[1461]: time="2026-03-02T13:09:51.601584530Z" level=info msg="RemovePodSandbox \"fde9548838e6d7e608bd9adcf616e1e42127a08b099159516eaa1f86ef128c2d\" returns successfully" Mar 2 13:09:51.603144 containerd[1461]: time="2026-03-02T13:09:51.603067897Z" level=info msg="StopPodSandbox for \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\"" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.655 [WARNING][5645] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"d8f412a5-1446-4c06-b684-ffc308baee40", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0", Pod:"calico-apiserver-5db5ff8f5c-6clqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac916584804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.655 [INFO][5645] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.655 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" iface="eth0" netns="" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.655 [INFO][5645] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.655 [INFO][5645] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.689 [INFO][5654] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.689 [INFO][5654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.689 [INFO][5654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.695 [WARNING][5654] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.695 [INFO][5654] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.697 [INFO][5654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.703330 containerd[1461]: 2026-03-02 13:09:51.700 [INFO][5645] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.703330 containerd[1461]: time="2026-03-02T13:09:51.703177203Z" level=info msg="TearDown network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\" successfully" Mar 2 13:09:51.703330 containerd[1461]: time="2026-03-02T13:09:51.703203091Z" level=info msg="StopPodSandbox for \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\" returns successfully" Mar 2 13:09:51.704770 containerd[1461]: time="2026-03-02T13:09:51.704576014Z" level=info msg="RemovePodSandbox for \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\"" Mar 2 13:09:51.704770 containerd[1461]: time="2026-03-02T13:09:51.704773483Z" level=info msg="Forcibly stopping sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\"" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.755 [WARNING][5673] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"d8f412a5-1446-4c06-b684-ffc308baee40", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c67b04615555a55d14da1df1188bdb4a5454f40504dc20695d6068597ac916a0", Pod:"calico-apiserver-5db5ff8f5c-6clqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac916584804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.756 [INFO][5673] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.756 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" iface="eth0" netns="" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.756 [INFO][5673] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.756 [INFO][5673] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.790 [INFO][5684] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.791 [INFO][5684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.791 [INFO][5684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.799 [WARNING][5684] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.799 [INFO][5684] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" HandleID="k8s-pod-network.3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--6clqc-eth0" Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.801 [INFO][5684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.809794 containerd[1461]: 2026-03-02 13:09:51.805 [INFO][5673] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067" Mar 2 13:09:51.809794 containerd[1461]: time="2026-03-02T13:09:51.809735670Z" level=info msg="TearDown network for sandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\" successfully" Mar 2 13:09:51.814561 containerd[1461]: time="2026-03-02T13:09:51.814503394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:09:51.814690 containerd[1461]: time="2026-03-02T13:09:51.814572704Z" level=info msg="RemovePodSandbox \"3584722aa6abd1a58393c05df738299bbd0a95c982d0f80246573e3b4930e067\" returns successfully" Mar 2 13:09:51.815346 containerd[1461]: time="2026-03-02T13:09:51.815289119Z" level=info msg="StopPodSandbox for \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\"" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.862 [WARNING][5709] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--p92rw-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"4e957ef7-0903-4b10-8258-cb24b1bc5ba6", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07", Pod:"goldmane-7d7658d587-p92rw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1db14a45ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.863 [INFO][5709] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.863 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" iface="eth0" netns="" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.863 [INFO][5709] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.863 [INFO][5709] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.893 [INFO][5717] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.893 [INFO][5717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.893 [INFO][5717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.899 [WARNING][5717] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.899 [INFO][5717] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.901 [INFO][5717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:51.907286 containerd[1461]: 2026-03-02 13:09:51.904 [INFO][5709] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:51.908472 containerd[1461]: time="2026-03-02T13:09:51.907346019Z" level=info msg="TearDown network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\" successfully" Mar 2 13:09:51.908472 containerd[1461]: time="2026-03-02T13:09:51.907381474Z" level=info msg="StopPodSandbox for \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\" returns successfully" Mar 2 13:09:51.908472 containerd[1461]: time="2026-03-02T13:09:51.908282063Z" level=info msg="RemovePodSandbox for \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\"" Mar 2 13:09:51.908472 containerd[1461]: time="2026-03-02T13:09:51.908317629Z" level=info msg="Forcibly stopping sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\"" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.956 [WARNING][5734] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--p92rw-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"4e957ef7-0903-4b10-8258-cb24b1bc5ba6", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8ff8c12988a1b6d480ecd840a5644b8403786c0349f2d95081bf758d3183f07", Pod:"goldmane-7d7658d587-p92rw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1db14a45ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.956 [INFO][5734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.956 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" iface="eth0" netns="" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.956 [INFO][5734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.956 [INFO][5734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.984 [INFO][5742] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.985 [INFO][5742] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.985 [INFO][5742] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.991 [WARNING][5742] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.991 [INFO][5742] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" HandleID="k8s-pod-network.853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Workload="localhost-k8s-goldmane--7d7658d587--p92rw-eth0" Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.994 [INFO][5742] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:52.000092 containerd[1461]: 2026-03-02 13:09:51.996 [INFO][5734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577" Mar 2 13:09:52.000092 containerd[1461]: time="2026-03-02T13:09:52.000080517Z" level=info msg="TearDown network for sandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\" successfully" Mar 2 13:09:52.004714 containerd[1461]: time="2026-03-02T13:09:52.004671927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:09:52.004714 containerd[1461]: time="2026-03-02T13:09:52.004747048Z" level=info msg="RemovePodSandbox \"853b459df7600e3d66cc22700d630103cbb6a5e891e7c771ea64f700b48dd577\" returns successfully" Mar 2 13:09:52.005553 containerd[1461]: time="2026-03-02T13:09:52.005498913Z" level=info msg="StopPodSandbox for \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\"" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.050 [WARNING][5760] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--nx776-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"602aa150-36ee-49fc-9e50-ac7469877eb8", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90", Pod:"coredns-7d764666f9-nx776", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84c11d74a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.051 [INFO][5760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.051 [INFO][5760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" iface="eth0" netns="" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.051 [INFO][5760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.051 [INFO][5760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.084 [INFO][5768] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.084 [INFO][5768] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.084 [INFO][5768] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.090 [WARNING][5768] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.090 [INFO][5768] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.092 [INFO][5768] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:52.098365 containerd[1461]: 2026-03-02 13:09:52.095 [INFO][5760] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.098365 containerd[1461]: time="2026-03-02T13:09:52.098330943Z" level=info msg="TearDown network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\" successfully" Mar 2 13:09:52.098365 containerd[1461]: time="2026-03-02T13:09:52.098360878Z" level=info msg="StopPodSandbox for \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\" returns successfully" Mar 2 13:09:52.099195 containerd[1461]: time="2026-03-02T13:09:52.099127745Z" level=info msg="RemovePodSandbox for \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\"" Mar 2 13:09:52.099236 containerd[1461]: time="2026-03-02T13:09:52.099201390Z" level=info msg="Forcibly stopping sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\"" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.148 [WARNING][5785] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--nx776-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"602aa150-36ee-49fc-9e50-ac7469877eb8", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb4e271863f3074e4a6f1163e197586d819515ab92cc4a85fca10c2bb5d9f90", Pod:"coredns-7d764666f9-nx776", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84c11d74a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.148 [INFO][5785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.148 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" iface="eth0" netns="" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.148 [INFO][5785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.148 [INFO][5785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.181 [INFO][5793] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.181 [INFO][5793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.181 [INFO][5793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.188 [WARNING][5793] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.188 [INFO][5793] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" HandleID="k8s-pod-network.803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Workload="localhost-k8s-coredns--7d764666f9--nx776-eth0" Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.190 [INFO][5793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:52.195334 containerd[1461]: 2026-03-02 13:09:52.192 [INFO][5785] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2" Mar 2 13:09:52.195814 containerd[1461]: time="2026-03-02T13:09:52.195364687Z" level=info msg="TearDown network for sandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\" successfully" Mar 2 13:09:52.200721 containerd[1461]: time="2026-03-02T13:09:52.200581205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:09:52.200939 containerd[1461]: time="2026-03-02T13:09:52.200750500Z" level=info msg="RemovePodSandbox \"803a33506f6d18f7d6572f01aa168f2b3f465f88b71f78d38b95a58e4462d5d2\" returns successfully" Mar 2 13:09:52.201408 containerd[1461]: time="2026-03-02T13:09:52.201316816Z" level=info msg="StopPodSandbox for \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\"" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.253 [WARNING][5810] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"0398d338-e506-4e7c-91f8-28e85a11fc76", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6", Pod:"calico-apiserver-5db5ff8f5c-zh52r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac4d596e711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.253 [INFO][5810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.253 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" iface="eth0" netns="" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.253 [INFO][5810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.253 [INFO][5810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.280 [INFO][5819] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.280 [INFO][5819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.280 [INFO][5819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.294 [WARNING][5819] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.294 [INFO][5819] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.297 [INFO][5819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:52.302932 containerd[1461]: 2026-03-02 13:09:52.300 [INFO][5810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.303405 containerd[1461]: time="2026-03-02T13:09:52.302937933Z" level=info msg="TearDown network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\" successfully" Mar 2 13:09:52.303405 containerd[1461]: time="2026-03-02T13:09:52.302981064Z" level=info msg="StopPodSandbox for \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\" returns successfully" Mar 2 13:09:52.303847 containerd[1461]: time="2026-03-02T13:09:52.303592274Z" level=info msg="RemovePodSandbox for \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\"" Mar 2 13:09:52.303847 containerd[1461]: time="2026-03-02T13:09:52.303692190Z" level=info msg="Forcibly stopping sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\"" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.350 [WARNING][5837] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0", GenerateName:"calico-apiserver-5db5ff8f5c-", Namespace:"calico-system", SelfLink:"", UID:"0398d338-e506-4e7c-91f8-28e85a11fc76", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db5ff8f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f801c1fdc5fc0bc77ab6d3bb1b98e59a89e985c977c90f90ee1a756be493dc6", Pod:"calico-apiserver-5db5ff8f5c-zh52r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliac4d596e711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.350 [INFO][5837] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.350 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" iface="eth0" netns="" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.350 [INFO][5837] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.350 [INFO][5837] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.383 [INFO][5845] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.383 [INFO][5845] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.383 [INFO][5845] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.391 [WARNING][5845] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.391 [INFO][5845] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" HandleID="k8s-pod-network.e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Workload="localhost-k8s-calico--apiserver--5db5ff8f5c--zh52r-eth0" Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.394 [INFO][5845] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:09:52.400795 containerd[1461]: 2026-03-02 13:09:52.397 [INFO][5837] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9" Mar 2 13:09:52.401203 containerd[1461]: time="2026-03-02T13:09:52.400779656Z" level=info msg="TearDown network for sandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\" successfully" Mar 2 13:09:52.409044 containerd[1461]: time="2026-03-02T13:09:52.408938556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:09:52.409044 containerd[1461]: time="2026-03-02T13:09:52.409029716Z" level=info msg="RemovePodSandbox \"e7176b02b3e309fbeee991a39497f7db65608465a6c0ab8f8d261d58fd513ae9\" returns successfully" Mar 2 13:09:54.341822 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:56702.service - OpenSSH per-connection server daemon (10.0.0.1:56702). Mar 2 13:09:54.410111 sshd[5854]: Accepted publickey for core from 10.0.0.1 port 56702 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:09:54.412354 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:54.417493 systemd-logind[1450]: New session 11 of user core. Mar 2 13:09:54.424772 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:09:54.563833 sshd[5854]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:54.573083 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:56702.service: Deactivated successfully. Mar 2 13:09:54.575445 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:09:54.576318 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:09:54.585111 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:56704.service - OpenSSH per-connection server daemon (10.0.0.1:56704). Mar 2 13:09:54.586705 systemd-logind[1450]: Removed session 11. Mar 2 13:09:54.622544 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 56704 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:09:54.624441 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:54.630027 systemd-logind[1450]: New session 12 of user core. Mar 2 13:09:54.638914 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:09:54.870142 sshd[5870]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:54.881274 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:56704.service: Deactivated successfully. Mar 2 13:09:54.885052 systemd[1]: session-12.scope: Deactivated successfully. 
Mar 2 13:09:54.889736 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:09:54.904112 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:56718.service - OpenSSH per-connection server daemon (10.0.0.1:56718). Mar 2 13:09:54.908330 systemd-logind[1450]: Removed session 12. Mar 2 13:09:54.966010 sshd[5883]: Accepted publickey for core from 10.0.0.1 port 56718 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:09:54.970404 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:54.979800 systemd-logind[1450]: New session 13 of user core. Mar 2 13:09:54.992976 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:09:57.866492 sshd[5883]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:57.883185 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:56718.service: Deactivated successfully. Mar 2 13:09:57.897373 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:09:57.898941 systemd[1]: session-13.scope: Consumed 2.632s CPU time. Mar 2 13:09:57.903511 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:09:57.906265 systemd-logind[1450]: Removed session 13. Mar 2 13:09:58.384687 kubelet[2528]: I0302 13:09:58.383243 2528 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-8554fc6696-qx8nl" podStartSLOduration=24.93893222 podStartE2EDuration="33.382953136s" podCreationTimestamp="2026-03-02 13:09:25 +0000 UTC" firstStartedPulling="2026-03-02 13:09:27.092955169 +0000 UTC m=+36.350696537" lastFinishedPulling="2026-03-02 13:09:35.536976096 +0000 UTC m=+44.794717453" observedRunningTime="2026-03-02 13:09:35.920114054 +0000 UTC m=+45.177855443" watchObservedRunningTime="2026-03-02 13:09:58.382953136 +0000 UTC m=+67.640694493" Mar 2 13:09:59.899908 systemd[1]: run-containerd-runc-k8s.io-cf7261932ca78fdea56c93f22671579b06a7e16ab6d63e81ff245bbc80460d56-runc.8KREFf.mount: Deactivated successfully. Mar 2 13:10:02.875854 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:56710.service - OpenSSH per-connection server daemon (10.0.0.1:56710). Mar 2 13:10:02.946411 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 56710 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:02.948777 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:02.955681 systemd-logind[1450]: New session 14 of user core. Mar 2 13:10:02.971007 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:10:03.146711 sshd[5963]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:03.152931 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:56710.service: Deactivated successfully. Mar 2 13:10:03.155833 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:10:03.157108 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:10:03.158796 systemd-logind[1450]: Removed session 14. Mar 2 13:10:05.934295 systemd[1]: run-containerd-runc-k8s.io-6c4408cf8c19af7feb1fcc6b32681ec4f74b331e978a5e41e2628ccc56426554-runc.BN5djZ.mount: Deactivated successfully. Mar 2 13:10:08.163255 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:56724.service - OpenSSH per-connection server daemon (10.0.0.1:56724). 
Mar 2 13:10:08.236125 sshd[6003]: Accepted publickey for core from 10.0.0.1 port 56724 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:08.238320 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:08.244849 systemd-logind[1450]: New session 15 of user core. Mar 2 13:10:08.259851 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:10:08.426835 sshd[6003]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:08.435171 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:56724.service: Deactivated successfully. Mar 2 13:10:08.436963 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:10:08.438635 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Mar 2 13:10:08.444971 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:56728.service - OpenSSH per-connection server daemon (10.0.0.1:56728). Mar 2 13:10:08.446161 systemd-logind[1450]: Removed session 15. Mar 2 13:10:08.482479 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 56728 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:08.484174 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:08.488936 systemd-logind[1450]: New session 16 of user core. Mar 2 13:10:08.495764 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 13:10:08.744978 sshd[6017]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:08.754389 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:56728.service: Deactivated successfully. Mar 2 13:10:08.756181 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:10:08.758175 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Mar 2 13:10:08.763399 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:56734.service - OpenSSH per-connection server daemon (10.0.0.1:56734). Mar 2 13:10:08.764725 systemd-logind[1450]: Removed session 16. Mar 2 13:10:08.816741 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 56734 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:08.818553 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:08.823472 systemd-logind[1450]: New session 17 of user core. Mar 2 13:10:08.834768 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 13:10:09.358267 sshd[6029]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:09.371762 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:56734.service: Deactivated successfully. Mar 2 13:10:09.377815 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 13:10:09.379393 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:10:09.390805 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:56750.service - OpenSSH per-connection server daemon (10.0.0.1:56750). Mar 2 13:10:09.394207 systemd-logind[1450]: Removed session 17. Mar 2 13:10:09.451476 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 56750 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:09.453206 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:09.457863 systemd-logind[1450]: New session 18 of user core. Mar 2 13:10:09.467824 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 2 13:10:09.846956 sshd[6054]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:09.860021 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:56750.service: Deactivated successfully. Mar 2 13:10:09.864695 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:10:09.868132 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Mar 2 13:10:09.877120 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:56758.service - OpenSSH per-connection server daemon (10.0.0.1:56758). Mar 2 13:10:09.878696 systemd-logind[1450]: Removed session 18. Mar 2 13:10:09.917362 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 56758 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:09.919911 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:09.925342 systemd-logind[1450]: New session 19 of user core. Mar 2 13:10:09.931906 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:10:10.082113 sshd[6078]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:10.086860 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:56758.service: Deactivated successfully. Mar 2 13:10:10.089709 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:10:10.090726 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:10:10.092280 systemd-logind[1450]: Removed session 19. Mar 2 13:10:10.932010 kubelet[2528]: E0302 13:10:10.931918 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:10:11.914763 kubelet[2528]: E0302 13:10:11.914704 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:10:15.095820 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). Mar 2 13:10:15.136638 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:15.138291 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:15.143224 systemd-logind[1450]: New session 20 of user core. Mar 2 13:10:15.156803 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:10:15.273826 sshd[6096]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:15.277712 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:51388.service: Deactivated successfully. Mar 2 13:10:15.279858 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:10:15.280742 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Mar 2 13:10:15.282092 systemd-logind[1450]: Removed session 20. 
Mar 2 13:10:15.922896 update_engine[1452]: I20260302 13:10:15.922761 1452 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 2 13:10:15.922896 update_engine[1452]: I20260302 13:10:15.922847 1452 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 2 13:10:15.924225 update_engine[1452]: I20260302 13:10:15.924175 1452 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 2 13:10:15.924835 update_engine[1452]: I20260302 13:10:15.924791 1452 omaha_request_params.cc:62] Current group set to lts Mar 2 13:10:15.925175 update_engine[1452]: I20260302 13:10:15.925128 1452 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 2 13:10:15.925175 update_engine[1452]: I20260302 13:10:15.925158 1452 update_attempter.cc:643] Scheduling an action processor start. Mar 2 13:10:15.925237 update_engine[1452]: I20260302 13:10:15.925179 1452 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 2 13:10:15.925324 update_engine[1452]: I20260302 13:10:15.925297 1452 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 2 13:10:15.925428 update_engine[1452]: I20260302 13:10:15.925400 1452 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 2 13:10:15.925449 update_engine[1452]: I20260302 13:10:15.925426 1452 omaha_request_action.cc:272] Request: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925449 update_engine[1452]: Mar 2 13:10:15.925870 update_engine[1452]: I20260302 13:10:15.925485 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 13:10:15.931791 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 2 13:10:15.932469 update_engine[1452]: I20260302 13:10:15.932431 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 13:10:15.932991 update_engine[1452]: I20260302 13:10:15.932910 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 2 13:10:15.948399 update_engine[1452]: E20260302 13:10:15.948346 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 13:10:15.948486 update_engine[1452]: I20260302 13:10:15.948431 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 2 13:10:18.914715 kubelet[2528]: E0302 13:10:18.914529 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:10:18.914715 kubelet[2528]: E0302 13:10:18.914547 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:10:20.293663 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:51398.service - OpenSSH per-connection server daemon (10.0.0.1:51398). 
Mar 2 13:10:20.375045 sshd[6132]: Accepted publickey for core from 10.0.0.1 port 51398 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:20.377560 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:20.383317 systemd-logind[1450]: New session 21 of user core. Mar 2 13:10:20.391021 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:10:20.623055 sshd[6132]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:20.626784 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:51398.service: Deactivated successfully. Mar 2 13:10:20.629028 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:10:20.631511 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:10:20.633041 systemd-logind[1450]: Removed session 21. Mar 2 13:10:25.635213 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:56074.service - OpenSSH per-connection server daemon (10.0.0.1:56074). Mar 2 13:10:25.676672 sshd[6146]: Accepted publickey for core from 10.0.0.1 port 56074 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:10:25.678085 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:25.688257 systemd-logind[1450]: New session 22 of user core. Mar 2 13:10:25.695780 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 13:10:25.855761 sshd[6146]: pam_unix(sshd:session): session closed for user core Mar 2 13:10:25.860799 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:56074.service: Deactivated successfully. Mar 2 13:10:25.863450 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:10:25.864276 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:10:25.865684 systemd-logind[1450]: Removed session 22. Mar 2 13:10:25.903519 update_engine[1452]: I20260302 13:10:25.903340 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 13:10:25.904593 update_engine[1452]: I20260302 13:10:25.904428 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 13:10:25.904917 update_engine[1452]: I20260302 13:10:25.904835 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 2 13:10:25.923216 update_engine[1452]: E20260302 13:10:25.923125 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 13:10:25.923301 update_engine[1452]: I20260302 13:10:25.923230 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 2