Mar 2 12:54:49.361241 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 12:54:49.361297 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:54:49.361309 kernel: BIOS-provided physical RAM map:
Mar 2 12:54:49.361315 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 12:54:49.361322 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 12:54:49.361332 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 12:54:49.361344 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 12:54:49.361354 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 12:54:49.361364 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 2 12:54:49.361375 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 2 12:54:49.361388 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 2 12:54:49.361397 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 2 12:54:49.361406 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 2 12:54:49.361416 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 2 12:54:49.361427 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 2 12:54:49.361437 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 12:54:49.361450 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 2 12:54:49.361461 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 2 12:54:49.361471 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 12:54:49.361541 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 12:54:49.361555 kernel: NX (Execute Disable) protection: active
Mar 2 12:54:49.361565 kernel: APIC: Static calls initialized
Mar 2 12:54:49.361577 kernel: efi: EFI v2.7 by EDK II
Mar 2 12:54:49.361587 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 2 12:54:49.361597 kernel: SMBIOS 2.8 present.
Mar 2 12:54:49.361609 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 2 12:54:49.361619 kernel: Hypervisor detected: KVM
Mar 2 12:54:49.361635 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 12:54:49.361645 kernel: kvm-clock: using sched offset of 7315576482 cycles
Mar 2 12:54:49.361657 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 12:54:49.361668 kernel: tsc: Detected 2445.424 MHz processor
Mar 2 12:54:49.361679 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 12:54:49.361689 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 12:54:49.361700 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 2 12:54:49.361712 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 12:54:49.361722 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 12:54:49.361737 kernel: Using GB pages for direct mapping
Mar 2 12:54:49.361747 kernel: Secure boot disabled
Mar 2 12:54:49.361757 kernel: ACPI: Early table checksum verification disabled
Mar 2 12:54:49.361767 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 12:54:49.361782 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 12:54:49.361793 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:49.361803 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:49.361817 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 12:54:49.361828 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:49.361838 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:49.361849 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:49.361859 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:49.361869 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 12:54:49.361880 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 12:54:49.361895 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 12:54:49.361937 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 12:54:49.361944 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 12:54:49.361950 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 12:54:49.361956 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 12:54:49.361963 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 12:54:49.361970 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 12:54:49.361982 kernel: No NUMA configuration found
Mar 2 12:54:49.361994 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 2 12:54:49.362011 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 2 12:54:49.362023 kernel: Zone ranges:
Mar 2 12:54:49.362034 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 12:54:49.362044 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 2 12:54:49.362054 kernel: Normal empty
Mar 2 12:54:49.362064 kernel: Movable zone start for each node
Mar 2 12:54:49.362075 kernel: Early memory node ranges
Mar 2 12:54:49.362088 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 12:54:49.362098 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 12:54:49.362110 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 12:54:49.362127 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 2 12:54:49.362139 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 2 12:54:49.362150 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 2 12:54:49.362161 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 2 12:54:49.362171 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 12:54:49.362182 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 12:54:49.362192 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 12:54:49.362203 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 12:54:49.362213 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 2 12:54:49.362228 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 2 12:54:49.362239 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 2 12:54:49.362249 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 12:54:49.362255 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 12:54:49.362261 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 12:54:49.362270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 12:54:49.362282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 12:54:49.362294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 12:54:49.362304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 12:54:49.362322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 12:54:49.362333 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 12:54:49.362343 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 12:54:49.362352 kernel: TSC deadline timer available
Mar 2 12:54:49.362362 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 12:54:49.362371 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 12:54:49.362381 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 12:54:49.362392 kernel: kvm-guest: setup PV sched yield
Mar 2 12:54:49.362402 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 2 12:54:49.362417 kernel: Booting paravirtualized kernel on KVM
Mar 2 12:54:49.362427 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 12:54:49.362437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 12:54:49.362448 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 12:54:49.362458 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 12:54:49.362468 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 12:54:49.362544 kernel: kvm-guest: PV spinlocks enabled
Mar 2 12:54:49.362559 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 12:54:49.362571 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:54:49.362586 kernel: random: crng init done
Mar 2 12:54:49.362597 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 12:54:49.362608 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 12:54:49.362618 kernel: Fallback order for Node 0: 0
Mar 2 12:54:49.362628 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 2 12:54:49.362638 kernel: Policy zone: DMA32
Mar 2 12:54:49.362648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 12:54:49.362659 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 2 12:54:49.362674 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 12:54:49.362684 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 12:54:49.362695 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 12:54:49.362706 kernel: Dynamic Preempt: voluntary
Mar 2 12:54:49.362717 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 12:54:49.362740 kernel: rcu: RCU event tracing is enabled.
Mar 2 12:54:49.362812 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 12:54:49.362826 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 12:54:49.362837 kernel: Rude variant of Tasks RCU enabled.
Mar 2 12:54:49.362938 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 12:54:49.362953 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 12:54:49.362966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 12:54:49.362982 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 12:54:49.362993 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 12:54:49.363004 kernel: Console: colour dummy device 80x25
Mar 2 12:54:49.363015 kernel: printk: console [ttyS0] enabled
Mar 2 12:54:49.363026 kernel: ACPI: Core revision 20230628
Mar 2 12:54:49.363037 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 12:54:49.363103 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 12:54:49.363114 kernel: x2apic enabled
Mar 2 12:54:49.363126 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 12:54:49.363139 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 12:54:49.363150 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 12:54:49.363162 kernel: kvm-guest: setup PV IPIs
Mar 2 12:54:49.363174 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 12:54:49.363264 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 12:54:49.363279 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 2 12:54:49.363299 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 12:54:49.363311 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 12:54:49.363323 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 12:54:49.363336 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 12:54:49.363348 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 12:54:49.363360 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 12:54:49.363372 kernel: Speculative Store Bypass: Vulnerable
Mar 2 12:54:49.363383 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 12:54:49.363400 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 12:54:49.363412 kernel: active return thunk: srso_alias_return_thunk
Mar 2 12:54:49.363424 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 12:54:49.363435 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 12:54:49.363446 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 12:54:49.363458 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 12:54:49.363469 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 12:54:49.363543 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 12:54:49.363603 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 12:54:49.363626 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 12:54:49.363638 kernel: Freeing SMP alternatives memory: 32K
Mar 2 12:54:49.363649 kernel: pid_max: default: 32768 minimum: 301
Mar 2 12:54:49.363661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 12:54:49.363672 kernel: landlock: Up and running.
Mar 2 12:54:49.363683 kernel: SELinux: Initializing.
Mar 2 12:54:49.363693 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:54:49.363704 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:54:49.363715 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 12:54:49.363730 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:54:49.363741 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:54:49.363752 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:54:49.363763 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 12:54:49.363774 kernel: signal: max sigframe size: 1776
Mar 2 12:54:49.363784 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 12:54:49.363795 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 12:54:49.363805 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 12:54:49.363816 kernel: smp: Bringing up secondary CPUs ...
Mar 2 12:54:49.363832 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 12:54:49.363844 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 12:54:49.363855 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 12:54:49.363867 kernel: smpboot: Max logical packages: 1
Mar 2 12:54:49.363878 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 2 12:54:49.363889 kernel: devtmpfs: initialized
Mar 2 12:54:49.363949 kernel: x86/mm: Memory block size: 128MB
Mar 2 12:54:49.363963 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 12:54:49.363974 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 12:54:49.363991 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 2 12:54:49.364003 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 12:54:49.364014 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 12:54:49.364026 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 12:54:49.364037 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 12:54:49.364049 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 12:54:49.364060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 12:54:49.364072 kernel: audit: initializing netlink subsys (disabled)
Mar 2 12:54:49.364084 kernel: audit: type=2000 audit(1772456086.890:1): state=initialized audit_enabled=0 res=1
Mar 2 12:54:49.364100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 12:54:49.364112 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 12:54:49.364124 kernel: cpuidle: using governor menu
Mar 2 12:54:49.364136 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 12:54:49.364148 kernel: dca service started, version 1.12.1
Mar 2 12:54:49.364160 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 12:54:49.364172 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 12:54:49.364183 kernel: PCI: Using configuration type 1 for base access
Mar 2 12:54:49.364199 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 12:54:49.364211 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 12:54:49.364222 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 12:54:49.364233 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 12:54:49.364245 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 12:54:49.364256 kernel: ACPI: Added _OSI(Module Device)
Mar 2 12:54:49.364267 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 12:54:49.364278 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 12:54:49.364291 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 12:54:49.364308 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 12:54:49.364320 kernel: ACPI: Interpreter enabled
Mar 2 12:54:49.364334 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 12:54:49.364345 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 12:54:49.364357 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 12:54:49.364368 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 12:54:49.364379 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 12:54:49.364391 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 12:54:49.364759 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 12:54:49.365060 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 12:54:49.365698 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 12:54:49.365716 kernel: PCI host bridge to bus 0000:00
Mar 2 12:54:49.366024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 12:54:49.366193 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 12:54:49.366420 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 12:54:49.366640 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 12:54:49.366778 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 12:54:49.366971 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 2 12:54:49.367153 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 12:54:49.367396 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 12:54:49.367697 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 12:54:49.367956 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 2 12:54:49.368187 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 2 12:54:49.368699 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 2 12:54:49.368863 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 2 12:54:49.369379 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 12:54:49.369654 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 12:54:49.369807 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 2 12:54:49.370033 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 2 12:54:49.370191 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 2 12:54:49.370381 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 12:54:49.370617 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 2 12:54:49.370807 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 2 12:54:49.370991 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 2 12:54:49.371145 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 12:54:49.371306 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 2 12:54:49.371608 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 2 12:54:49.371803 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 2 12:54:49.372009 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 2 12:54:49.372211 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 12:54:49.372572 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 12:54:49.372868 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 12:54:49.373256 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 2 12:54:49.373552 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 2 12:54:49.374100 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 12:54:49.374336 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 2 12:54:49.374356 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 12:54:49.374370 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 12:54:49.374384 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 12:54:49.374392 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 12:54:49.374405 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 12:54:49.374412 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 12:54:49.374419 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 12:54:49.374430 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 12:54:49.374443 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 12:54:49.374455 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 12:54:49.374468 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 12:54:49.374613 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 12:54:49.374626 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 12:54:49.374639 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 12:54:49.374646 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 12:54:49.374653 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 12:54:49.374660 kernel: iommu: Default domain type: Translated
Mar 2 12:54:49.374667 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 12:54:49.374673 kernel: efivars: Registered efivars operations
Mar 2 12:54:49.374680 kernel: PCI: Using ACPI for IRQ routing
Mar 2 12:54:49.374687 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 12:54:49.374694 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 12:54:49.374704 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 2 12:54:49.374710 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 2 12:54:49.374717 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 2 12:54:49.374876 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 12:54:49.375075 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 12:54:49.375220 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 12:54:49.375229 kernel: vgaarb: loaded
Mar 2 12:54:49.375236 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 12:54:49.375248 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 12:54:49.375255 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 12:54:49.375261 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 12:54:49.375268 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 12:54:49.375275 kernel: pnp: PnP ACPI init
Mar 2 12:54:49.375460 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 12:54:49.375473 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 12:54:49.375534 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 12:54:49.375542 kernel: NET: Registered PF_INET protocol family
Mar 2 12:54:49.375554 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 12:54:49.375560 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 12:54:49.375567 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 12:54:49.375574 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 12:54:49.375581 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 12:54:49.375588 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 12:54:49.375594 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:54:49.375601 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:54:49.375611 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 12:54:49.375617 kernel: NET: Registered PF_XDP protocol family
Mar 2 12:54:49.375770 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 2 12:54:49.375955 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 2 12:54:49.376093 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 12:54:49.376225 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 12:54:49.376716 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 12:54:49.376997 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 12:54:49.377245 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 12:54:49.377386 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 2 12:54:49.377396 kernel: PCI: CLS 0 bytes, default 64
Mar 2 12:54:49.377403 kernel: Initialise system trusted keyrings
Mar 2 12:54:49.377410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 12:54:49.377417 kernel: Key type asymmetric registered
Mar 2 12:54:49.377424 kernel: Asymmetric key parser 'x509' registered
Mar 2 12:54:49.377431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 12:54:49.377438 kernel: io scheduler mq-deadline registered
Mar 2 12:54:49.377450 kernel: io scheduler kyber registered
Mar 2 12:54:49.377456 kernel: io scheduler bfq registered
Mar 2 12:54:49.377463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 12:54:49.377471 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 12:54:49.377546 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 12:54:49.377557 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 12:54:49.377564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 12:54:49.377570 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 12:54:49.377577 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 12:54:49.377589 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 12:54:49.377596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 12:54:49.377603 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 12:54:49.377767 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 12:54:49.377944 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 12:54:49.378088 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T12:54:48 UTC (1772456088)
Mar 2 12:54:49.378224 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 12:54:49.378233 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 12:54:49.378244 kernel: efifb: probing for efifb
Mar 2 12:54:49.378251 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 2 12:54:49.378258 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 2 12:54:49.378265 kernel: efifb: scrolling: redraw
Mar 2 12:54:49.378272 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 2 12:54:49.378279 kernel: Console: switching to colour frame buffer device 100x37
Mar 2 12:54:49.378285 kernel: fb0: EFI VGA frame buffer device
Mar 2 12:54:49.378292 kernel: pstore: Using crash dump compression: deflate
Mar 2 12:54:49.378299 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 2 12:54:49.378308 kernel: NET: Registered PF_INET6 protocol family
Mar 2 12:54:49.378315 kernel: Segment Routing with IPv6
Mar 2 12:54:49.378322 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 12:54:49.378329 kernel: NET: Registered PF_PACKET protocol family
Mar 2 12:54:49.378336 kernel: Key type dns_resolver registered
Mar 2 12:54:49.378343 kernel: IPI shorthand broadcast: enabled
Mar 2 12:54:49.378372 kernel: sched_clock: Marking stable (1864226302, 394285775)->(2447698180, -189186103)
Mar 2 12:54:49.378382 kernel: registered taskstats version 1
Mar 2 12:54:49.378389 kernel: Loading compiled-in X.509 certificates
Mar 2 12:54:49.378399 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 12:54:49.378406 kernel: Key type .fscrypt registered
Mar 2 12:54:49.378413 kernel: Key type fscrypt-provisioning registered
Mar 2 12:54:49.378420 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 12:54:49.378427 kernel: ima: Allocated hash algorithm: sha1
Mar 2 12:54:49.378434 kernel: ima: No architecture policies found
Mar 2 12:54:49.378441 kernel: clk: Disabling unused clocks
Mar 2 12:54:49.378448 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 12:54:49.378455 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 12:54:49.378465 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 12:54:49.378472 kernel: Run /init as init process
Mar 2 12:54:49.378554 kernel: with arguments:
Mar 2 12:54:49.378562 kernel: /init
Mar 2 12:54:49.378569 kernel: with environment:
Mar 2 12:54:49.378576 kernel: HOME=/
Mar 2 12:54:49.378583 kernel: TERM=linux
Mar 2 12:54:49.378629 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 12:54:49.378647 systemd[1]: Detected virtualization kvm.
Mar 2 12:54:49.378655 systemd[1]: Detected architecture x86-64.
Mar 2 12:54:49.378663 systemd[1]: Running in initrd.
Mar 2 12:54:49.378670 systemd[1]: No hostname configured, using default hostname.
Mar 2 12:54:49.378677 systemd[1]: Hostname set to <localhost>.
Mar 2 12:54:49.378703 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 12:54:49.378710 systemd[1]: Queued start job for default target initrd.target.
Mar 2 12:54:49.378721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:54:49.378729 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:54:49.378754 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 12:54:49.378761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:54:49.378769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 12:54:49.378812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 12:54:49.378837 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 12:54:49.378846 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 12:54:49.378868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:54:49.378876 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:54:49.378924 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:54:49.378932 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:54:49.378943 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:54:49.378951 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:54:49.378958 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:54:49.378966 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:54:49.378991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 12:54:49.378999 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 12:54:49.379006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:54:49.379014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:54:49.379022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:54:49.379033 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:54:49.379040 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 12:54:49.379048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 12:54:49.379055 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 12:54:49.379063 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 12:54:49.379071 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 12:54:49.379078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 12:54:49.379086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:54:49.379121 systemd-journald[195]: Collecting audit messages is disabled.
Mar 2 12:54:49.379139 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 12:54:49.379147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:54:49.379155 systemd-journald[195]: Journal started
Mar 2 12:54:49.379175 systemd-journald[195]: Runtime Journal (/run/log/journal/9c067aeedd33419b8592c52f13db1b44) is 6.0M, max 48.3M, 42.2M free.
Mar 2 12:54:49.382842 systemd-modules-load[196]: Inserted module 'overlay'
Mar 2 12:54:49.390432 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:54:49.391183 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 12:54:49.396690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:49.415664 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:54:49.425606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:54:49.436697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:54:49.448806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 12:54:49.448832 kernel: Bridge firewalling registered
Mar 2 12:54:49.444888 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 2 12:54:49.451962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:54:49.459981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:49.468081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:54:49.477720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:54:49.511087 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 12:54:49.516981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:54:49.519771 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:54:49.538676 dracut-cmdline[223]: dracut-dracut-053
Mar 2 12:54:49.541198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:54:49.548651 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:54:49.563829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:54:49.580648 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:54:49.620249 systemd-resolved[264]: Positive Trust Anchors:
Mar 2 12:54:49.621239 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 12:54:49.621268 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 12:54:49.638399 systemd-resolved[264]: Defaulting to hostname 'linux'.
Mar 2 12:54:49.640392 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 12:54:49.672620 kernel: SCSI subsystem initialized
Mar 2 12:54:49.661125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:54:49.682522 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 12:54:49.695579 kernel: iscsi: registered transport (tcp)
Mar 2 12:54:49.721623 kernel: iscsi: registered transport (qla4xxx)
Mar 2 12:54:49.721693 kernel: QLogic iSCSI HBA Driver
Mar 2 12:54:49.848657 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 12:54:49.890166 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 12:54:49.940607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 12:54:49.940652 kernel: device-mapper: uevent: version 1.0.3
Mar 2 12:54:49.943928 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 12:54:49.989626 kernel: raid6: avx2x4 gen() 31757 MB/s
Mar 2 12:54:50.007599 kernel: raid6: avx2x2 gen() 29547 MB/s
Mar 2 12:54:50.029163 kernel: raid6: avx2x1 gen() 23450 MB/s
Mar 2 12:54:50.029210 kernel: raid6: using algorithm avx2x4 gen() 31757 MB/s
Mar 2 12:54:50.051417 kernel: raid6: .... xor() 4382 MB/s, rmw enabled
Mar 2 12:54:50.051628 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 12:54:50.080640 kernel: xor: automatically using best checksumming function avx
Mar 2 12:54:50.256688 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 12:54:50.272437 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 12:54:50.286128 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:54:50.302256 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 2 12:54:50.309643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:54:50.335198 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 12:54:50.369302 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Mar 2 12:54:50.412677 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 12:54:50.434108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 12:54:50.544993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:54:50.562734 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 12:54:50.589528 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 12:54:50.596255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 12:54:50.600425 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 12:54:50.623006 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 12:54:50.627287 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 12:54:50.612187 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:54:50.645774 kernel: GPT:9289727 != 19775487
Mar 2 12:54:50.645950 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 12:54:50.645982 kernel: GPT:9289727 != 19775487
Mar 2 12:54:50.646008 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 12:54:50.646033 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:50.623019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 12:54:50.645433 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 12:54:50.662580 kernel: libata version 3.00 loaded.
Mar 2 12:54:50.664453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 12:54:50.669970 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 12:54:50.665012 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:50.678638 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:54:50.682419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 12:54:50.682647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:50.690348 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:54:50.710043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:54:50.722973 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 12:54:50.723199 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 12:54:50.723218 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Mar 2 12:54:50.711567 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 12:54:50.739709 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (462)
Mar 2 12:54:50.739740 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 12:54:50.740089 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 12:54:50.742972 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 12:54:50.747541 kernel: AES CTR mode by8 optimization enabled
Mar 2 12:54:50.757570 kernel: scsi host0: ahci
Mar 2 12:54:50.757780 kernel: scsi host1: ahci
Mar 2 12:54:50.758044 kernel: scsi host2: ahci
Mar 2 12:54:50.758987 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 12:54:50.767175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:50.771844 kernel: scsi host3: ahci
Mar 2 12:54:50.778266 kernel: scsi host4: ahci
Mar 2 12:54:50.778538 kernel: scsi host5: ahci
Mar 2 12:54:50.781546 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 12:54:50.802832 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 2 12:54:50.802858 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 2 12:54:50.802870 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 2 12:54:50.802880 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 2 12:54:50.802891 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 2 12:54:50.802937 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 2 12:54:50.829623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 12:54:50.843026 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 12:54:50.852034 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 12:54:50.870700 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 12:54:50.879365 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:54:50.890734 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:50.890764 disk-uuid[554]: Primary Header is updated.
Mar 2 12:54:50.890764 disk-uuid[554]: Secondary Entries is updated.
Mar 2 12:54:50.890764 disk-uuid[554]: Secondary Header is updated.
Mar 2 12:54:50.902139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:50.921314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:51.096597 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:51.099888 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:51.102538 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:51.105580 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:51.105605 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 12:54:51.108690 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:51.114199 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 12:54:51.114249 kernel: ata3.00: applying bridge limits
Mar 2 12:54:51.114622 kernel: ata3.00: configured for UDMA/100
Mar 2 12:54:51.123556 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 12:54:51.175733 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 12:54:51.176232 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 12:54:51.189647 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 12:54:51.897597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:51.899094 disk-uuid[556]: The operation has completed successfully.
Mar 2 12:54:51.937662 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 12:54:51.937834 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 12:54:51.961748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 12:54:51.975587 sh[593]: Success
Mar 2 12:54:52.000581 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 12:54:52.055133 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 12:54:52.072787 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 12:54:52.085708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 12:54:52.110833 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 12:54:52.110878 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:52.110894 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 12:54:52.110965 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 12:54:52.110982 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 12:54:52.121833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 12:54:52.126581 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 12:54:52.143821 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 12:54:52.151357 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 12:54:52.167678 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:52.167699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:52.167713 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:54:52.173609 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:54:52.186070 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 12:54:52.193324 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:52.199043 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 12:54:52.210684 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 12:54:52.331864 ignition[687]: Ignition 2.19.0
Mar 2 12:54:52.331897 ignition[687]: Stage: fetch-offline
Mar 2 12:54:52.331971 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:52.331983 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:52.332064 ignition[687]: parsed url from cmdline: ""
Mar 2 12:54:52.344820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 12:54:52.332069 ignition[687]: no config URL provided
Mar 2 12:54:52.332075 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 12:54:52.332084 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Mar 2 12:54:52.332112 ignition[687]: op(1): [started] loading QEMU firmware config module
Mar 2 12:54:52.360678 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 12:54:52.332118 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 12:54:52.345640 ignition[687]: op(1): [finished] loading QEMU firmware config module
Mar 2 12:54:52.345662 ignition[687]: QEMU firmware config was not found. Ignoring...
Mar 2 12:54:52.405136 systemd-networkd[782]: lo: Link UP
Mar 2 12:54:52.405165 systemd-networkd[782]: lo: Gained carrier
Mar 2 12:54:52.407696 systemd-networkd[782]: Enumeration completed
Mar 2 12:54:52.408753 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 12:54:52.411284 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:54:52.411289 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 12:54:52.413214 systemd-networkd[782]: eth0: Link UP
Mar 2 12:54:52.413219 systemd-networkd[782]: eth0: Gained carrier
Mar 2 12:54:52.413227 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:54:52.415102 systemd[1]: Reached target network.target - Network.
Mar 2 12:54:52.457734 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 12:54:52.620330 ignition[687]: parsing config with SHA512: 9003dcccd8b020ce8abd4cd00bcdb0244a69b76347eec1bcd5ee0dd73f8d7a832c5425f26e4a32513bb75c90baab8838410332c26e4d1bb70d069de96d330930
Mar 2 12:54:52.626436 unknown[687]: fetched base config from "system"
Mar 2 12:54:52.626475 unknown[687]: fetched user config from "qemu"
Mar 2 12:54:52.627758 ignition[687]: fetch-offline: fetch-offline passed
Mar 2 12:54:52.629068 ignition[687]: Ignition finished successfully
Mar 2 12:54:52.645962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 12:54:52.651763 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 12:54:52.672265 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 12:54:52.857999 ignition[786]: Ignition 2.19.0
Mar 2 12:54:52.858078 ignition[786]: Stage: kargs
Mar 2 12:54:52.858261 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:52.858275 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:52.864370 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 12:54:52.859092 ignition[786]: kargs: kargs passed
Mar 2 12:54:52.859143 ignition[786]: Ignition finished successfully
Mar 2 12:54:52.880125 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 12:54:52.938730 kernel: hrtimer: interrupt took 2911076 ns
Mar 2 12:54:52.953238 ignition[795]: Ignition 2.19.0
Mar 2 12:54:52.953312 ignition[795]: Stage: disks
Mar 2 12:54:52.959180 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 12:54:52.953666 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:52.964079 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 12:54:52.953684 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:52.972251 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 12:54:52.955067 ignition[795]: disks: disks passed
Mar 2 12:54:52.976890 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:54:52.955138 ignition[795]: Ignition finished successfully
Mar 2 12:54:52.980856 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:54:52.983898 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:54:53.000682 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 12:54:53.029196 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 12:54:53.034300 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 12:54:53.055012 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 12:54:53.192541 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 12:54:53.193181 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 12:54:53.197147 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:54:53.212740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:54:53.231816 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Mar 2 12:54:53.231846 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:53.217185 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 12:54:53.250712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:53.250733 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:54:53.250744 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:54:53.244373 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 12:54:53.244420 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 12:54:53.244545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:54:53.257817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:54:53.268797 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 12:54:53.287154 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 12:54:53.349829 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 12:54:53.357668 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 2 12:54:53.365581 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 12:54:53.370040 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 12:54:53.498884 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 12:54:53.515690 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 12:54:53.522698 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 12:54:53.532863 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:53.526613 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 12:54:53.576087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 12:54:53.582139 ignition[926]: INFO : Ignition 2.19.0
Mar 2 12:54:53.582139 ignition[926]: INFO : Stage: mount
Mar 2 12:54:53.582139 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:53.582139 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:53.582139 ignition[926]: INFO : mount: mount passed
Mar 2 12:54:53.582139 ignition[926]: INFO : Ignition finished successfully
Mar 2 12:54:53.582292 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 12:54:53.597674 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 12:54:53.918853 systemd-networkd[782]: eth0: Gained IPv6LL
Mar 2 12:54:54.211753 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:54:54.229272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 2 12:54:54.229303 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 12:54:54.229317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:54:54.235013 kernel: BTRFS info (device vda6): using free space tree Mar 2 12:54:54.241556 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 12:54:54.244076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 12:54:54.268163 ignition[957]: INFO : Ignition 2.19.0 Mar 2 12:54:54.268163 ignition[957]: INFO : Stage: files Mar 2 12:54:54.272817 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:54.272817 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:54.272817 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 2 12:54:54.282935 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 12:54:54.282935 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 12:54:54.291727 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 12:54:54.291727 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 12:54:54.291727 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 12:54:54.291727 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 2 12:54:54.291727 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 2 12:54:54.291727 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:54:54.291727 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 12:54:54.287434 unknown[957]: wrote ssh authorized keys file for user: core Mar 2 12:54:54.361087 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 2 12:54:54.466121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:54:54.466121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 2 12:54:54.477398 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 12:54:54.482642 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 12:54:54.488119 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 12:54:54.493426 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 12:54:54.498814 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 12:54:54.504087 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 12:54:54.509568 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 12:54:54.515473 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 12:54:54.521124 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 12:54:54.526403 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 12:54:54.534241 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 12:54:54.534241 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 12:54:54.548426 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 2 12:54:54.844579 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 2 12:54:55.488104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 12:54:55.488104 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 2 12:54:55.498832 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 2 12:54:55.505907 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 2 12:54:55.505907 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 2 12:54:55.505907 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 2 12:54:55.522810 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 12:54:55.528735 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 12:54:55.528735 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 2 12:54:55.528735 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Mar 2 12:54:55.541713 ignition[957]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 12:54:55.547784 ignition[957]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 12:54:55.547784 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 2 12:54:55.547784 ignition[957]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 12:54:55.595621 ignition[957]: INFO : files: op(12): op(13): [started] removing enablement 
symlink(s) for "coreos-metadata.service" Mar 2 12:54:55.601461 ignition[957]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 12:54:55.606259 ignition[957]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 12:54:55.606259 ignition[957]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Mar 2 12:54:55.614360 ignition[957]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 12:54:55.621982 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 12:54:55.627106 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 12:54:55.632115 ignition[957]: INFO : files: files passed Mar 2 12:54:55.634377 ignition[957]: INFO : Ignition finished successfully Mar 2 12:54:55.639225 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 12:54:55.655683 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 12:54:55.659598 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 12:54:55.665951 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 12:54:55.666110 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 12:54:55.679642 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 12:54:55.683676 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:54:55.683676 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:54:55.680723 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 12:54:55.703722 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:54:55.687876 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 12:54:55.708723 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 12:54:55.740775 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 12:54:55.740971 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 12:54:55.747472 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 12:54:55.754046 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 12:54:55.757095 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 12:54:55.774714 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 12:54:55.790969 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 12:54:55.806697 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 12:54:55.821119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:54:55.824763 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:54:55.831705 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 12:54:55.837764 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
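Note: the files stage above replays the user-provided Ignition config fetched earlier, with each op(...) entry naming one action. A Butane sketch that would produce roughly this sequence is shown below; it is a hypothetical reconstruction from the log, so the SSH key, unit bodies, and file contents are placeholders, not the real config:

  # config.bu -- hypothetical reconstruction; transpile with: butane --strict < config.bu > config.ign
  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...   # placeholder; the real key is not logged
  storage:
    files:
      - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
        contents:
          source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw
      # /etc/flatcar-cgroupv1, /etc/flatcar/update.conf and the install.sh /
      # *.yaml files seen above are elided; their contents are not in the log
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
  systemd:
    units:
      - name: containerd.service
        dropins:
          - name: 10-use-cgroupfs.conf
            contents: |
              # body not logged; the name suggests it pins the cgroupfs driver
      - name: prepare-helm.service
        enabled: true
        contents: |
          # unit body not logged
      - name: coreos-metadata.service
        enabled: false

Rendered to JSON, this is the document ignition[957] replays; the SHA512 recorded during fetch-offline is the digest of that rendered config.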
Mar 2 12:54:55.837899 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 12:54:55.844638 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 12:54:55.850425 systemd[1]: Stopped target basic.target - Basic System. Mar 2 12:54:55.856563 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 12:54:55.859899 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 12:54:55.866210 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 12:54:55.872167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 12:54:55.875307 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 12:54:55.881956 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 12:54:55.888386 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 12:54:55.891537 systemd[1]: Stopped target swap.target - Swaps. Mar 2 12:54:55.897039 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 12:54:55.897179 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 12:54:55.903550 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:54:55.910181 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:54:55.913738 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 12:54:55.914041 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:54:55.921015 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 12:54:55.921116 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 12:54:55.927249 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 12:54:55.927385 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 12:54:55.933010 systemd[1]: Stopped target paths.target - Path Units. Mar 2 12:54:55.938440 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 12:54:55.938701 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:54:55.945289 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 12:54:56.021779 ignition[1011]: INFO : Ignition 2.19.0 Mar 2 12:54:56.021779 ignition[1011]: INFO : Stage: umount Mar 2 12:54:56.021779 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:54:56.021779 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:54:56.021779 ignition[1011]: INFO : umount: umount passed Mar 2 12:54:56.021779 ignition[1011]: INFO : Ignition finished successfully Mar 2 12:54:55.948052 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 12:54:55.954088 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 12:54:55.954203 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 12:54:55.960556 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 12:54:55.960670 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 12:54:55.966716 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 12:54:55.966847 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 12:54:55.975212 systemd[1]: ignition-files.service: Deactivated successfully. 
Mar 2 12:54:55.975332 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 12:54:55.996720 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 12:54:56.000436 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 12:54:56.005317 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 12:54:56.005806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:54:56.013685 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 12:54:56.013835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 12:54:56.024037 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 12:54:56.024177 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 12:54:56.029685 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 12:54:56.031757 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 12:54:56.031891 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 12:54:56.037827 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 12:54:56.037994 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 12:54:56.046289 systemd[1]: Stopped target network.target - Network. Mar 2 12:54:56.048967 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 12:54:56.049026 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 12:54:56.054703 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 12:54:56.054757 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 12:54:56.060530 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 12:54:56.060585 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 12:54:56.063654 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 12:54:56.063716 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 12:54:56.067158 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 12:54:56.067213 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 12:54:56.070148 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 12:54:56.082270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 12:54:56.088651 systemd-networkd[782]: eth0: DHCPv6 lease lost Mar 2 12:54:56.092617 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 12:54:56.092849 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 12:54:56.099582 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 12:54:56.099656 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:54:56.291313 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 2 12:54:56.116664 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 12:54:56.120880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 12:54:56.120971 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 12:54:56.128336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:54:56.135018 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 12:54:56.135283 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 2 12:54:56.143866 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 12:54:56.143974 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:54:56.147713 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 12:54:56.147768 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 12:54:56.154731 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 12:54:56.154787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:54:56.162636 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 12:54:56.162841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:54:56.169676 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 12:54:56.169808 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 12:54:56.177083 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 12:54:56.177149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 12:54:56.182066 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 12:54:56.182115 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 12:54:56.188179 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 12:54:56.188237 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 12:54:56.194122 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 12:54:56.194179 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 12:54:56.200093 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 12:54:56.200153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:54:56.223757 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 12:54:56.228662 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 12:54:56.228722 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:54:56.232678 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 2 12:54:56.232751 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 12:54:56.239738 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 12:54:56.239796 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:54:56.239968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:54:56.240031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:56.241021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 12:54:56.241174 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 12:54:56.242229 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 12:54:56.244249 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 12:54:56.259985 systemd[1]: Switching root. 
Mar 2 12:54:56.428962 systemd-journald[195]: Journal stopped Mar 2 12:54:57.757005 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 12:54:57.757078 kernel: SELinux: policy capability open_perms=1 Mar 2 12:54:57.757092 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 12:54:57.757103 kernel: SELinux: policy capability always_check_network=0 Mar 2 12:54:57.757116 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 12:54:57.757127 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 12:54:57.757144 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 12:54:57.757155 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 12:54:57.757171 kernel: audit: type=1403 audit(1772456096.574:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 12:54:57.757184 systemd[1]: Successfully loaded SELinux policy in 70.307ms. Mar 2 12:54:57.757210 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.040ms. Mar 2 12:54:57.757223 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 12:54:57.757235 systemd[1]: Detected virtualization kvm. Mar 2 12:54:57.757247 systemd[1]: Detected architecture x86-64. Mar 2 12:54:57.757259 systemd[1]: Detected first boot. Mar 2 12:54:57.757271 systemd[1]: Initializing machine ID from VM UUID. Mar 2 12:54:57.757283 zram_generator::config[1071]: No configuration found. Mar 2 12:54:57.757296 systemd[1]: Populated /etc with preset unit settings. Mar 2 12:54:57.757311 systemd[1]: Queued start job for default target multi-user.target. Mar 2 12:54:57.757323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 2 12:54:57.757336 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 12:54:57.757348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 12:54:57.757360 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 12:54:57.757372 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 12:54:57.757385 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 12:54:57.757400 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 12:54:57.757412 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 12:54:57.757429 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 12:54:57.757441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:54:57.757453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:54:57.757465 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 2 12:54:57.757477 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 12:54:57.759209 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 12:54:57.759224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 2 12:54:57.759237 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 12:54:57.759249 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:54:57.759266 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 12:54:57.759277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:54:57.759289 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 12:54:57.759305 systemd[1]: Reached target slices.target - Slice Units. Mar 2 12:54:57.759317 systemd[1]: Reached target swap.target - Swaps. Mar 2 12:54:57.759328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 12:54:57.759339 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 12:54:57.759351 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 12:54:57.759366 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 2 12:54:57.759378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:54:57.759389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 12:54:57.759401 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 12:54:57.759414 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 12:54:57.759425 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 12:54:57.759437 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 12:54:57.759448 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 12:54:57.759459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:57.759474 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 12:54:57.759538 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 12:54:57.759551 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 12:54:57.759562 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 12:54:57.759574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:54:57.759585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 12:54:57.759597 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 12:54:57.759609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:54:57.759620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:54:57.759636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:54:57.759647 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 2 12:54:57.759659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:54:57.759670 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 12:54:57.759682 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 2 12:54:57.759695 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
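Note: the systemd-journald.service warnings just above mean the unit asks for per-unit IP filtering, but this system boots on the legacy cgroup v1 hierarchy (systemd later logs "System is tainted: cgroupsv1"), which cannot attach the BPF socket filters that enforce it, so the access lists are ignored rather than failing the unit. For reference, the directives involved look like this (an illustrative drop-in, not something present on this system):

  # example.service.d/10-ipfilter.conf (illustrative)
  [Service]
  # drop all IP traffic for this unit's cgroup, except loopback;
  # enforced via cgroup-attached BPF, hence unenforceable on cgroup v1
  IPAddressDeny=any
  IPAddressAllow=localhost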
Mar 2 12:54:57.759706 kernel: fuse: init (API version 7.39) Mar 2 12:54:57.759722 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 12:54:57.759734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 12:54:57.759745 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 12:54:57.759757 kernel: ACPI: bus type drm_connector registered Mar 2 12:54:57.759768 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 12:54:57.759801 systemd-journald[1170]: Collecting audit messages is disabled. Mar 2 12:54:57.759829 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 12:54:57.759844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:54:57.759856 systemd-journald[1170]: Journal started Mar 2 12:54:57.759875 systemd-journald[1170]: Runtime Journal (/run/log/journal/9c067aeedd33419b8592c52f13db1b44) is 6.0M, max 48.3M, 42.2M free. Mar 2 12:54:57.767556 kernel: loop: module loaded Mar 2 12:54:57.781410 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 12:54:57.783883 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 12:54:57.788201 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 12:54:57.791810 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 12:54:57.795064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 2 12:54:57.798656 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 12:54:57.802225 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 12:54:57.805767 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 12:54:57.809830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:54:57.814168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 12:54:57.814399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 12:54:57.818435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:54:57.818722 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:54:57.822836 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 12:54:57.823102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 12:54:57.826906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:54:57.827155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:54:57.831581 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 12:54:57.831803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 12:54:57.835723 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:54:57.836020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:54:57.839912 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 12:54:57.844147 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 12:54:57.848471 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 12:54:57.863898 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 2 12:54:57.875728 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 12:54:57.880289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 12:54:57.883596 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 12:54:57.885696 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 12:54:57.902819 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 12:54:57.907020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 12:54:57.910214 systemd-journald[1170]: Time spent on flushing to /var/log/journal/9c067aeedd33419b8592c52f13db1b44 is 13.752ms for 969 entries. Mar 2 12:54:57.910214 systemd-journald[1170]: System Journal (/var/log/journal/9c067aeedd33419b8592c52f13db1b44) is 8.0M, max 195.6M, 187.6M free. Mar 2 12:54:57.939740 systemd-journald[1170]: Received client request to flush runtime journal. Mar 2 12:54:57.912700 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 12:54:57.918654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 12:54:57.921115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 12:54:57.927420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 12:54:57.942656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:54:57.946978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 12:54:57.950731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 12:54:57.962113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 12:54:57.966770 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 12:54:57.974909 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 12:54:57.988427 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 2 12:54:57.990637 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Mar 2 12:54:57.990671 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Mar 2 12:54:57.995099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:54:58.002375 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 12:54:58.454169 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 12:54:58.459963 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 2 12:54:58.490643 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 12:54:58.505007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 12:54:58.544671 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Mar 2 12:54:58.544722 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Mar 2 12:54:58.551575 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
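Note: the "ACLs are not supported, ignoring." lines come from systemd-tmpfiles skipping ACL-setting entries; the feature string earlier in this log shows "-ACL", i.e. systemd compiled without libacl, so tmpfiles.d lines of type a/a+ are dropped with this warning. The entry type being skipped looks roughly like what systemd's stock snippets use for journal directories (illustrative):

  # tmpfiles.d syntax: 'a+' appends an ACL to a path
  a+ /var/log/journal - - - - d:group:adm:r-x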
Mar 2 12:54:58.955652 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 12:54:58.972680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:54:59.002906 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Mar 2 12:54:59.031053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:54:59.045750 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 12:54:59.064653 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 12:54:59.079823 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 2 12:54:59.138610 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1242) Mar 2 12:54:59.145997 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 12:54:59.194563 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 2 12:54:59.201570 kernel: ACPI: button: Power Button [PWRF] Mar 2 12:54:59.233829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 12:54:59.244177 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 2 12:54:59.244469 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 12:54:59.269456 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 2 12:54:59.269828 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 12:54:59.296456 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 12:54:59.304814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:54:59.309525 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 12:54:59.322763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:54:59.323153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:59.341747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:54:59.353594 systemd-networkd[1244]: lo: Link UP Mar 2 12:54:59.353954 systemd-networkd[1244]: lo: Gained carrier Mar 2 12:54:59.356307 systemd-networkd[1244]: Enumeration completed Mar 2 12:54:59.357349 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:54:59.357406 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 12:54:59.357649 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 12:54:59.358860 systemd-networkd[1244]: eth0: Link UP Mar 2 12:54:59.358948 systemd-networkd[1244]: eth0: Gained carrier Mar 2 12:54:59.359016 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:54:59.363978 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 2 12:54:59.513609 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 12:54:59.519735 kernel: kvm_amd: TSC scaling supported Mar 2 12:54:59.519802 kernel: kvm_amd: Nested Virtualization enabled Mar 2 12:54:59.519819 kernel: kvm_amd: Nested Paging enabled Mar 2 12:54:59.521803 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 12:54:59.525789 kernel: kvm_amd: PMU virtualization is disabled Mar 2 12:54:59.580600 kernel: EDAC MC: Ver: 3.0.0 Mar 2 12:54:59.583688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:54:59.618753 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 12:54:59.632783 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 12:54:59.643459 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 12:54:59.680724 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 2 12:54:59.687257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:54:59.697774 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 2 12:54:59.706747 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 12:54:59.801595 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 2 12:54:59.847794 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 12:54:59.856468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 12:54:59.856622 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 12:54:59.867727 systemd[1]: Reached target machines.target - Containers. Mar 2 12:54:59.877583 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 2 12:54:59.904669 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 12:54:59.910289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 12:54:59.913752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:54:59.915077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 12:54:59.924704 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 2 12:54:59.930183 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 12:54:59.937770 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 12:54:59.947017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 12:54:59.953542 kernel: loop0: detected capacity change from 0 to 140768 Mar 2 12:54:59.963575 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 12:54:59.964823 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Mar 2 12:54:59.985580 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 12:55:00.019534 kernel: loop1: detected capacity change from 0 to 142488 Mar 2 12:55:00.060569 kernel: loop2: detected capacity change from 0 to 228704 Mar 2 12:55:00.321875 kernel: loop3: detected capacity change from 0 to 140768 Mar 2 12:55:00.362538 kernel: loop4: detected capacity change from 0 to 142488 Mar 2 12:55:00.383566 kernel: loop5: detected capacity change from 0 to 228704 Mar 2 12:55:00.395887 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 12:55:00.396962 (sd-merge)[1311]: Merged extensions into '/usr'. Mar 2 12:55:00.403107 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 12:55:00.403145 systemd[1]: Reloading... Mar 2 12:55:00.449199 systemd-networkd[1244]: eth0: Gained IPv6LL Mar 2 12:55:00.498569 zram_generator::config[1349]: No configuration found. Mar 2 12:55:00.634976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:55:00.700084 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 12:55:00.702757 systemd[1]: Reloading finished in 298 ms. Mar 2 12:55:00.724414 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 12:55:00.729750 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 12:55:00.733859 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 12:55:00.757835 systemd[1]: Starting ensure-sysext.service... Mar 2 12:55:00.762007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 12:55:00.770864 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Mar 2 12:55:00.770911 systemd[1]: Reloading... Mar 2 12:55:00.827733 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 12:55:00.828212 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 12:55:00.829415 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 12:55:00.829821 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Mar 2 12:55:00.829981 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Mar 2 12:55:00.834186 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 12:55:00.834225 systemd-tmpfiles[1386]: Skipping /boot Mar 2 12:55:00.845537 zram_generator::config[1414]: No configuration found. Mar 2 12:55:00.849971 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 12:55:00.850007 systemd-tmpfiles[1386]: Skipping /boot Mar 2 12:55:00.976910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:55:01.046311 systemd[1]: Reloading finished in 274 ms. Mar 2 12:55:01.065797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
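Note: the (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the preceding loop0..loop5 capacity changes are those images being attached (the squashfs module loads just before), and the kubernetes image is the one Ignition symlinked into /etc/extensions during the files stage. On a running system the merge can be inspected and redone with the systemd-sysext verbs, roughly:

  systemd-sysext status    # show the hierarchies and the extensions merged into them
  systemd-sysext refresh   # unmerge, rescan the extension directories, merge again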
Mar 2 12:55:01.088077 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:01.093306 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 12:55:01.098281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 12:55:01.106696 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 12:55:01.112461 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 12:55:01.120383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:55:01.120621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:55:01.122976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:55:01.129813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:55:01.144586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:55:01.149222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:55:01.149317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:55:01.152420 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 2 12:55:01.159337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:55:01.159892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:55:01.164661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:55:01.164898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:55:01.170257 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:55:01.170586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:55:01.186536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:55:01.186766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:55:01.195977 augenrules[1493]: No rules Mar 2 12:55:01.197918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:55:01.206807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:55:01.213783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:55:01.217194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:55:01.227871 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 12:55:01.232065 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:55:01.235017 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 12:55:01.237443 systemd-resolved[1463]: Positive Trust Anchors: Mar 2 12:55:01.237566 systemd-resolved[1463]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 12:55:01.237615 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 12:55:01.240291 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 12:55:01.242388 systemd-resolved[1463]: Defaulting to hostname 'linux'. Mar 2 12:55:01.245440 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 12:55:01.250271 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 12:55:01.254880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:55:01.255163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:55:01.259443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:55:01.259797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:55:01.264534 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:55:01.264817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:55:01.269313 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 12:55:01.284339 systemd[1]: Reached target network.target - Network. Mar 2 12:55:01.287645 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 12:55:01.291389 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:55:01.295426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:55:01.295796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:55:01.312972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:55:01.317804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:55:01.322137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:55:01.329904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:55:01.333182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:55:01.333441 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 2 12:55:01.333724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:55:01.335625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:55:01.335882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:55:01.340243 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 2 12:55:01.340594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 12:55:01.346068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:55:01.346310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:55:01.351281 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:55:01.351634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:55:01.357915 systemd[1]: Finished ensure-sysext.service. Mar 2 12:55:01.367267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 12:55:01.367413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 12:55:01.379871 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 2 12:55:01.491976 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 12:55:02.083016 systemd-resolved[1463]: Clock change detected. Flushing caches. Mar 2 12:55:02.083052 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 12:55:02.083097 systemd-timesyncd[1531]: Initial clock synchronization to Mon 2026-03-02 12:55:02.082906 UTC. Mar 2 12:55:02.087073 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 12:55:02.091064 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 12:55:02.095150 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 12:55:02.099171 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 12:55:02.103321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 12:55:02.103356 systemd[1]: Reached target paths.target - Path Units. Mar 2 12:55:02.109065 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 12:55:02.113556 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 12:55:02.123576 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 12:55:02.131736 systemd[1]: Reached target timers.target - Timer Units. Mar 2 12:55:02.139765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 12:55:02.169768 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 12:55:02.181680 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 12:55:02.189540 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 12:55:02.194299 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 12:55:02.198640 systemd[1]: Reached target basic.target - Basic System. Mar 2 12:55:02.208135 systemd[1]: System is tainted: cgroupsv1 Mar 2 12:55:02.208296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 2 12:55:02.208465 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 12:55:02.236049 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 12:55:02.249597 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 12:55:02.266282 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
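Note: the timesyncd lines above step the clock against the DHCP-provided server (10.0.0.1:123); systemd-resolved reacts by flushing its caches, since record validity was judged against the old clock. Pinning servers statically would be a small drop-in of this shape (illustrative only; this host took its server from the DHCP lease):

  # /etc/systemd/timesyncd.conf.d/10-ntp.conf (illustrative)
  [Time]
  NTP=10.0.0.1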
Mar 2 12:55:02.274580 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 12:55:02.318805 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 12:55:02.324658 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 12:55:02.325637 jq[1540]: false Mar 2 12:55:02.342507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:02.372768 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 12:55:02.383918 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 12:55:02.391198 extend-filesystems[1542]: Found loop3 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found loop4 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found loop5 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found sr0 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda1 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda2 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda3 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found usr Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda4 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda6 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda7 Mar 2 12:55:02.391198 extend-filesystems[1542]: Found vda9 Mar 2 12:55:02.391198 extend-filesystems[1542]: Checking size of /dev/vda9 Mar 2 12:55:02.481034 extend-filesystems[1542]: Resized partition /dev/vda9 Mar 2 12:55:02.391478 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 2 12:55:02.414320 dbus-daemon[1539]: [system] SELinux support is enabled Mar 2 12:55:02.496188 extend-filesystems[1571]: resize2fs 1.47.1 (20-May-2024) Mar 2 12:55:02.396565 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 12:55:02.411934 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 2 12:55:02.428662 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 12:55:02.432215 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 12:55:02.437510 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 12:55:02.453604 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 12:55:02.458949 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 12:55:02.522890 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 12:55:02.522953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1257) Mar 2 12:55:02.512950 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 12:55:02.516939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 12:55:02.518124 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 12:55:02.518520 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 12:55:02.529603 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 12:55:02.529961 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 2 12:55:02.565097 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 2 12:55:02.582446 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 12:55:02.582819 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 12:55:02.595004 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 12:55:02.607620 jq[1570]: true Mar 2 12:55:02.622906 update_engine[1565]: I20260302 12:55:02.618233 1565 main.cc:92] Flatcar Update Engine starting Mar 2 12:55:02.629555 update_engine[1565]: I20260302 12:55:02.629295 1565 update_check_scheduler.cc:74] Next update check in 11m11s Mar 2 12:55:02.666451 tar[1577]: linux-amd64/LICENSE Mar 2 12:55:02.666451 tar[1577]: linux-amd64/helm Mar 2 12:55:02.654543 systemd[1]: Started update-engine.service - Update Engine. Mar 2 12:55:02.664602 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 12:55:02.664698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 12:55:02.664724 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 12:55:02.670928 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 2 12:55:02.675194 jq[1594]: true Mar 2 12:55:02.670957 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 12:55:02.676942 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 12:55:02.686658 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 12:55:02.690686 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 12:55:02.722363 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 12:55:02.722363 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 12:55:02.722363 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 12:55:02.748549 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Mar 2 12:55:02.726341 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 12:55:02.726732 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 12:55:03.238032 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 12:55:03.287475 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 12:55:03.301058 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button) Mar 2 12:55:03.302039 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 12:55:03.308325 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Mar 2 12:55:03.306966 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 12:55:03.309051 systemd-logind[1562]: New seat seat0. Mar 2 12:55:03.329105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 12:55:03.336558 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 12:55:03.338745 systemd[1]: Started systemd-logind.service - User Login Management. 
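The extend-filesystems lines above record an on-line ext4 grow of /dev/vda9 from 553,472 to 1,864,699 blocks of 4 KiB, i.e. roughly 2.1 GiB to 7.1 GiB, performed while / stays mounted. A minimal sketch of the same operation follows, assuming resize2fs from e2fsprogs is on PATH and the program runs as root; invoked with no size argument, resize2fs grows the filesystem to fill the underlying block device.

    // growfs.go: on-line ext4 grow, as extend-filesystems.service does above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        dev := "/dev/vda9" // device taken from the log; adjust for your system

        // resize2fs without an explicit size grows the filesystem to the full
        // size of the device. ext4 supports doing this on-line, which is why
        // the log says "on-line resizing required" while / remains mounted.
        out, err := exec.Command("resize2fs", dev).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            log.Fatalf("resize2fs failed: %v", err)
        }
    }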
Mar 2 12:55:03.340613 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 12:55:03.345029 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 12:55:03.347725 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 12:55:03.364006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 12:55:03.389911 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 12:55:03.407099 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 12:55:03.429038 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 12:55:03.433669 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 12:55:03.747035 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 12:55:03.765703 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:40916.service - OpenSSH per-connection server daemon (10.0.0.1:40916). Mar 2 12:55:04.269433 containerd[1586]: time="2026-03-02T12:55:04.267937112Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 12:55:04.438086 containerd[1586]: time="2026-03-02T12:55:04.437865799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.442243 containerd[1586]: time="2026-03-02T12:55:04.442172661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:55:04.442243 containerd[1586]: time="2026-03-02T12:55:04.442222414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 12:55:04.442243 containerd[1586]: time="2026-03-02T12:55:04.442238714Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 12:55:04.442669 containerd[1586]: time="2026-03-02T12:55:04.442608555Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 2 12:55:04.442700 containerd[1586]: time="2026-03-02T12:55:04.442651255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.442872 containerd[1586]: time="2026-03-02T12:55:04.442808328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:55:04.442900 containerd[1586]: time="2026-03-02T12:55:04.442871065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.443289 containerd[1586]: time="2026-03-02T12:55:04.443233241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:55:04.443289 containerd[1586]: time="2026-03-02T12:55:04.443271753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.443369 containerd[1586]: time="2026-03-02T12:55:04.443304475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:55:04.443437 containerd[1586]: time="2026-03-02T12:55:04.443365779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.443715 containerd[1586]: time="2026-03-02T12:55:04.443655991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.444332 containerd[1586]: time="2026-03-02T12:55:04.444259658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:55:04.447759 containerd[1586]: time="2026-03-02T12:55:04.447695845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:55:04.447801 containerd[1586]: time="2026-03-02T12:55:04.447748523Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 2 12:55:04.448098 containerd[1586]: time="2026-03-02T12:55:04.448035509Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 12:55:04.448551 containerd[1586]: time="2026-03-02T12:55:04.448484808Z" level=info msg="metadata content store policy set" policy=shared Mar 2 12:55:04.458502 containerd[1586]: time="2026-03-02T12:55:04.458467725Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 12:55:04.458886 containerd[1586]: time="2026-03-02T12:55:04.458864316Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 12:55:04.459080 containerd[1586]: time="2026-03-02T12:55:04.459064199Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 12:55:04.459140 containerd[1586]: time="2026-03-02T12:55:04.459127036Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 2 12:55:04.459236 containerd[1586]: time="2026-03-02T12:55:04.459222876Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 12:55:04.459709 containerd[1586]: time="2026-03-02T12:55:04.459688084Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 2 12:55:04.460754 containerd[1586]: time="2026-03-02T12:55:04.460732575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 12:55:04.461144 containerd[1586]: time="2026-03-02T12:55:04.461082138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 2 12:55:04.461292 containerd[1586]: time="2026-03-02T12:55:04.461194037Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 2 12:55:04.461481 containerd[1586]: time="2026-03-02T12:55:04.461457048Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 2 12:55:04.461574 containerd[1586]: time="2026-03-02T12:55:04.461551785Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Mar 2 12:55:04.461696 containerd[1586]: time="2026-03-02T12:55:04.461677530Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.461823 containerd[1586]: time="2026-03-02T12:55:04.461806470Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.461914 containerd[1586]: time="2026-03-02T12:55:04.461901037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.461962 containerd[1586]: time="2026-03-02T12:55:04.461949758Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.462005 containerd[1586]: time="2026-03-02T12:55:04.461994762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.462046 containerd[1586]: time="2026-03-02T12:55:04.462035868Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.462118 containerd[1586]: time="2026-03-02T12:55:04.462096331Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 12:55:04.462448 containerd[1586]: time="2026-03-02T12:55:04.462427009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.462509 containerd[1586]: time="2026-03-02T12:55:04.462497561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.462556 containerd[1586]: time="2026-03-02T12:55:04.462545360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.462775 containerd[1586]: time="2026-03-02T12:55:04.462639235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.462775 containerd[1586]: time="2026-03-02T12:55:04.462659673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.462775 containerd[1586]: time="2026-03-02T12:55:04.462672567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463120654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463144629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463157583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463202677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463215551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463228005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463311230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463333311Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463453336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463513688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.463688 containerd[1586]: time="2026-03-02T12:55:04.463529758Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 12:55:04.464048 containerd[1586]: time="2026-03-02T12:55:04.463966784Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 12:55:04.464518 containerd[1586]: time="2026-03-02T12:55:04.464497045Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 12:55:04.465435 containerd[1586]: time="2026-03-02T12:55:04.464562116Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 12:55:04.465435 containerd[1586]: time="2026-03-02T12:55:04.464605247Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 12:55:04.465435 containerd[1586]: time="2026-03-02T12:55:04.464617509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 2 12:55:04.465435 containerd[1586]: time="2026-03-02T12:55:04.464649920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 12:55:04.465435 containerd[1586]: time="2026-03-02T12:55:04.464696917Z" level=info msg="NRI interface is disabled by configuration." Mar 2 12:55:04.465435 containerd[1586]: time="2026-03-02T12:55:04.464718508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 2 12:55:04.466052 containerd[1586]: time="2026-03-02T12:55:04.465946651Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 12:55:04.468543 containerd[1586]: time="2026-03-02T12:55:04.466978809Z" level=info msg="Connect containerd service" Mar 2 12:55:04.468543 containerd[1586]: time="2026-03-02T12:55:04.467107720Z" level=info msg="using legacy CRI server" Mar 2 12:55:04.468543 containerd[1586]: time="2026-03-02T12:55:04.467118469Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 12:55:04.468543 containerd[1586]: time="2026-03-02T12:55:04.467509810Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 12:55:04.469454 containerd[1586]: time="2026-03-02T12:55:04.469366066Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 12:55:04.470135 
containerd[1586]: time="2026-03-02T12:55:04.469950999Z" level=info msg="Start subscribing containerd event" Mar 2 12:55:04.470722 containerd[1586]: time="2026-03-02T12:55:04.470704656Z" level=info msg="Start recovering state" Mar 2 12:55:04.471115 containerd[1586]: time="2026-03-02T12:55:04.471098191Z" level=info msg="Start event monitor" Mar 2 12:55:04.471491 containerd[1586]: time="2026-03-02T12:55:04.471474945Z" level=info msg="Start snapshots syncer" Mar 2 12:55:04.471588 containerd[1586]: time="2026-03-02T12:55:04.471574350Z" level=info msg="Start cni network conf syncer for default" Mar 2 12:55:04.471657 containerd[1586]: time="2026-03-02T12:55:04.471644331Z" level=info msg="Start streaming server" Mar 2 12:55:04.472488 containerd[1586]: time="2026-03-02T12:55:04.471799410Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 12:55:04.472775 containerd[1586]: time="2026-03-02T12:55:04.472758231Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 12:55:04.476087 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 12:55:04.476543 containerd[1586]: time="2026-03-02T12:55:04.476517511Z" level=info msg="containerd successfully booted in 0.210853s" Mar 2 12:55:04.489516 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 40916 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:04.491792 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:04.601040 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 12:55:04.613916 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 12:55:04.623682 systemd-logind[1562]: New session 1 of user core. Mar 2 12:55:04.698152 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 12:55:04.723702 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 12:55:04.732636 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 12:55:05.024345 tar[1577]: linux-amd64/README.md Mar 2 12:55:05.051079 systemd[1675]: Queued start job for default target default.target. Mar 2 12:55:05.212794 systemd[1675]: Created slice app.slice - User Application Slice. Mar 2 12:55:05.215820 systemd[1675]: Reached target paths.target - Paths. Mar 2 12:55:05.215975 systemd[1675]: Reached target timers.target - Timers. Mar 2 12:55:05.232523 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 12:55:05.240905 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 12:55:05.244466 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 12:55:05.244534 systemd[1675]: Reached target sockets.target - Sockets. Mar 2 12:55:05.244549 systemd[1675]: Reached target basic.target - Basic System. Mar 2 12:55:05.244594 systemd[1675]: Reached target default.target - Main User Target. Mar 2 12:55:05.244635 systemd[1675]: Startup finished in 499ms. Mar 2 12:55:05.246177 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 12:55:05.261906 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 12:55:05.325703 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:40920.service - OpenSSH per-connection server daemon (10.0.0.1:40920). 
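Once containerd logs "serving..." on /run/containerd/containerd.sock, any gRPC client can talk to it; kubelet will do so through the CRI plugin whose configuration was dumped above. A minimal round-trip with the official Go client (github.com/containerd/containerd) looks like the sketch below, assuming it runs as root on the same host; Kubernetes-managed resources live in the "k8s.io" namespace.

    // ctrversion.go: dial the socket from the log and ask the daemon its
    // version, the smallest possible containerd client round-trip.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // containerd is namespaced; kubelet's resources live under "k8s.io".
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        v, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("containerd version:", v.Version) // v1.7.21 in this log
    }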
Mar 2 12:55:05.393203 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 40920 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:05.398025 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:05.403975 systemd-logind[1562]: New session 2 of user core. Mar 2 12:55:05.424027 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 12:55:05.681947 sshd[1692]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:05.692668 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:40934.service - OpenSSH per-connection server daemon (10.0.0.1:40934). Mar 2 12:55:05.698769 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:40920.service: Deactivated successfully. Mar 2 12:55:05.703118 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 12:55:05.706493 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Mar 2 12:55:05.709968 systemd-logind[1562]: Removed session 2. Mar 2 12:55:05.738553 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 40934 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:05.741181 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:05.747055 systemd-logind[1562]: New session 3 of user core. Mar 2 12:55:05.755805 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 12:55:05.820475 sshd[1697]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:05.978562 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:40934.service: Deactivated successfully. Mar 2 12:55:05.982642 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Mar 2 12:55:05.982927 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 12:55:05.984817 systemd-logind[1562]: Removed session 3. Mar 2 12:55:07.035529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:07.039951 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 12:55:07.043665 systemd[1]: Startup finished in 9.778s (kernel) + 9.944s (userspace) = 19.723s. Mar 2 12:55:07.045624 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:09.298300 kubelet[1716]: E0302 12:55:09.298031 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:09.305608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:09.307242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:15.838688 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:42026.service - OpenSSH per-connection server daemon (10.0.0.1:42026). Mar 2 12:55:15.877714 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 42026 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:15.880031 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:15.886610 systemd-logind[1562]: New session 4 of user core. Mar 2 12:55:15.907959 systemd[1]: Started session-4.scope - Session 4 of User core. 
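The kubelet failure above (and its repeats below) all have one cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so until the node is bootstrapped the unit exits 1 and systemd's restart policy keeps rescheduling it, which is the rising restart counter seen later in the log. A trivial, purely illustrative sketch of the failing precondition:

    // precheck.go: the same fail-fast check kubelet trips over in the log.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml" // created by kubeadm init/join

        if _, err := os.Stat(path); err != nil {
            // kubelet exits 1 here; systemd's Restart= policy then schedules
            // another attempt, bumping the restart counter seen in the log.
            fmt.Fprintln(os.Stderr, "kubelet config missing:", err)
            os.Exit(1)
        }
        fmt.Println("config present, kubelet would proceed")
    }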
Mar 2 12:55:15.977829 sshd[1730]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:15.988808 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:42034.service - OpenSSH per-connection server daemon (10.0.0.1:42034). Mar 2 12:55:15.989678 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:42026.service: Deactivated successfully. Mar 2 12:55:15.993374 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Mar 2 12:55:15.994239 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 12:55:15.996069 systemd-logind[1562]: Removed session 4. Mar 2 12:55:16.028693 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 42034 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:16.030674 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:16.037445 systemd-logind[1562]: New session 5 of user core. Mar 2 12:55:16.051843 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 12:55:16.122529 sshd[1735]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:16.133800 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:42040.service - OpenSSH per-connection server daemon (10.0.0.1:42040). Mar 2 12:55:16.134646 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:42034.service: Deactivated successfully. Mar 2 12:55:16.139776 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 12:55:16.141159 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Mar 2 12:55:16.144110 systemd-logind[1562]: Removed session 5. Mar 2 12:55:16.175790 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 42040 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:16.179636 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:16.192842 systemd-logind[1562]: New session 6 of user core. Mar 2 12:55:16.199778 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 12:55:16.291658 sshd[1743]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:16.309100 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:42044.service - OpenSSH per-connection server daemon (10.0.0.1:42044). Mar 2 12:55:16.309800 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:42040.service: Deactivated successfully. Mar 2 12:55:16.315678 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 12:55:16.315824 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Mar 2 12:55:16.318787 systemd-logind[1562]: Removed session 6. Mar 2 12:55:16.352203 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 42044 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:16.354297 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:16.360694 systemd-logind[1562]: New session 7 of user core. Mar 2 12:55:16.374984 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 12:55:16.448254 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 12:55:16.448991 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:16.472101 sudo[1758]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:16.475487 sshd[1751]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:16.488922 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:42058.service - OpenSSH per-connection server daemon (10.0.0.1:42058). 
Mar 2 12:55:16.489946 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:42044.service: Deactivated successfully. Mar 2 12:55:16.494206 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 12:55:16.495523 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Mar 2 12:55:16.497832 systemd-logind[1562]: Removed session 7. Mar 2 12:55:16.540885 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 42058 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:16.543345 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:16.552587 systemd-logind[1562]: New session 8 of user core. Mar 2 12:55:16.571782 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 12:55:16.646548 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 12:55:16.647191 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:16.655979 sudo[1768]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:16.665064 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 12:55:16.665556 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:16.691767 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:16.695471 auditctl[1771]: No rules Mar 2 12:55:16.697844 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 12:55:16.698915 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 12:55:16.711031 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:16.761080 augenrules[1790]: No rules Mar 2 12:55:16.763786 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 12:55:16.765595 sudo[1767]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:16.768364 sshd[1760]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:16.786972 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:42060.service - OpenSSH per-connection server daemon (10.0.0.1:42060). Mar 2 12:55:16.787751 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:42058.service: Deactivated successfully. Mar 2 12:55:16.791376 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Mar 2 12:55:16.792471 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 12:55:16.799326 systemd-logind[1562]: Removed session 8. Mar 2 12:55:16.851785 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 42060 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:55:16.857055 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:16.867067 systemd-logind[1562]: New session 9 of user core. Mar 2 12:55:16.875789 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 12:55:16.941247 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 12:55:16.941730 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:19.509326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 12:55:19.522641 systemd[1]: Starting docker.service - Docker Application Container Engine... 
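The sudo'd `systemctl restart audit-rules` above stops the service (auditctl reports "No rules" once the kernel's rule list is empty) and restarts it through augenrules, which compiles /etc/audit/rules.d/*.rules into a single rule file and loads it; both report "No rules" here because the session had just deleted the only rule files. A sketch of the same flush-and-reload cycle, assuming the audit userspace tools are installed and the program runs as root:

    // auditreload.go: flush and reload audit rules, mirroring the
    // audit-rules.service restart in the log (illustrative only).
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("%s %v:\n%s", name, args, out)
        if err != nil {
            log.Fatalf("%s failed: %v", name, err)
        }
    }

    func main() {
        run("auditctl", "-D")       // delete all currently loaded rules
        run("augenrules", "--load") // recompile rules.d/ and load the result
        run("auditctl", "-l")       // list loaded rules ("No rules" in this log)
    }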
Mar 2 12:55:19.523031 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 12:55:19.524503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:21.045129 dockerd[1822]: time="2026-03-02T12:55:21.044346296Z" level=info msg="Starting up" Mar 2 12:55:21.122071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:21.145100 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:21.850128 kubelet[1846]: E0302 12:55:21.849931 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:21.858718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:21.859223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:21.914059 dockerd[1822]: time="2026-03-02T12:55:21.913951631Z" level=info msg="Loading containers: start." Mar 2 12:55:22.310716 kernel: Initializing XFRM netlink socket Mar 2 12:55:22.435318 systemd-networkd[1244]: docker0: Link UP Mar 2 12:55:22.468062 dockerd[1822]: time="2026-03-02T12:55:22.467971284Z" level=info msg="Loading containers: done." Mar 2 12:55:22.625299 dockerd[1822]: time="2026-03-02T12:55:22.625127132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 12:55:22.625611 dockerd[1822]: time="2026-03-02T12:55:22.625468399Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 12:55:22.625742 dockerd[1822]: time="2026-03-02T12:55:22.625632797Z" level=info msg="Daemon has completed initialization" Mar 2 12:55:22.685258 dockerd[1822]: time="2026-03-02T12:55:22.685146562Z" level=info msg="API listen on /run/docker.sock" Mar 2 12:55:22.685571 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 12:55:23.926225 containerd[1586]: time="2026-03-02T12:55:23.925984947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 2 12:55:24.708969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659367075.mount: Deactivated successfully. 
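With "API listen on /run/docker.sock" the daemon above is usable; the overlay2 warning is only about image-build performance (the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR prevents native diff), not correctness. A minimal liveness check with the official Go SDK (github.com/docker/docker/client) is sketched below, assuming access to the daemon socket:

    // dockerping.go: smallest round-trip against the freshly started daemon.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // FromEnv honours DOCKER_HOST and falls back to the local socket.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("daemon API version:", ping.APIVersion)
    }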
Mar 2 12:55:28.486785 containerd[1586]: time="2026-03-02T12:55:28.486606197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:28.487672 containerd[1586]: time="2026-03-02T12:55:28.487048052Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 2 12:55:28.488779 containerd[1586]: time="2026-03-02T12:55:28.488717819Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:28.493193 containerd[1586]: time="2026-03-02T12:55:28.493103324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:28.495370 containerd[1586]: time="2026-03-02T12:55:28.495286610Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 4.569183573s" Mar 2 12:55:28.495370 containerd[1586]: time="2026-03-02T12:55:28.495374705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 2 12:55:28.501724 containerd[1586]: time="2026-03-02T12:55:28.501654852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 2 12:55:30.999818 containerd[1586]: time="2026-03-02T12:55:30.999726430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:31.001280 containerd[1586]: time="2026-03-02T12:55:31.001141684Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 2 12:55:31.002247 containerd[1586]: time="2026-03-02T12:55:31.002195811Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:31.005776 containerd[1586]: time="2026-03-02T12:55:31.005704234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:31.007710 containerd[1586]: time="2026-03-02T12:55:31.007536454Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.505804698s" Mar 2 12:55:31.007710 containerd[1586]: time="2026-03-02T12:55:31.007642000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 2 12:55:31.010925 containerd[1586]: 
time="2026-03-02T12:55:31.010901151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 2 12:55:32.143069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 12:55:32.158826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:32.540002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:32.566334 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:32.799801 kubelet[2071]: E0302 12:55:32.799258 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:32.803914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:32.804461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:33.391359 containerd[1586]: time="2026-03-02T12:55:33.391236309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:33.392803 containerd[1586]: time="2026-03-02T12:55:33.392742624Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 2 12:55:33.395266 containerd[1586]: time="2026-03-02T12:55:33.395035065Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:33.400731 containerd[1586]: time="2026-03-02T12:55:33.400643177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:33.402830 containerd[1586]: time="2026-03-02T12:55:33.402760309Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.391716462s" Mar 2 12:55:33.402957 containerd[1586]: time="2026-03-02T12:55:33.402845397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 2 12:55:33.406085 containerd[1586]: time="2026-03-02T12:55:33.406029928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 2 12:55:35.247689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825751137.mount: Deactivated successfully. 
Mar 2 12:55:36.082144 containerd[1586]: time="2026-03-02T12:55:36.081928843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:36.083222 containerd[1586]: time="2026-03-02T12:55:36.083099630Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 2 12:55:36.084657 containerd[1586]: time="2026-03-02T12:55:36.084610822Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:36.087590 containerd[1586]: time="2026-03-02T12:55:36.087517390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:36.088699 containerd[1586]: time="2026-03-02T12:55:36.088645773Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.682569109s" Mar 2 12:55:36.088764 containerd[1586]: time="2026-03-02T12:55:36.088711145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 2 12:55:36.091104 containerd[1586]: time="2026-03-02T12:55:36.090999075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 2 12:55:36.613211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651791326.mount: Deactivated successfully. 
Mar 2 12:55:38.311470 containerd[1586]: time="2026-03-02T12:55:38.311248925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:38.312749 containerd[1586]: time="2026-03-02T12:55:38.312119927Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 2 12:55:38.314366 containerd[1586]: time="2026-03-02T12:55:38.314267343Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:38.318838 containerd[1586]: time="2026-03-02T12:55:38.318748209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:38.320479 containerd[1586]: time="2026-03-02T12:55:38.320370897Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.229310788s" Mar 2 12:55:38.320617 containerd[1586]: time="2026-03-02T12:55:38.320512130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 2 12:55:38.323301 containerd[1586]: time="2026-03-02T12:55:38.323011771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 2 12:55:38.800367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278332456.mount: Deactivated successfully. 
Mar 2 12:55:38.808999 containerd[1586]: time="2026-03-02T12:55:38.808898951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:38.810185 containerd[1586]: time="2026-03-02T12:55:38.810077530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 2 12:55:38.811685 containerd[1586]: time="2026-03-02T12:55:38.811543305Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:38.816565 containerd[1586]: time="2026-03-02T12:55:38.816504697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:38.818956 containerd[1586]: time="2026-03-02T12:55:38.818750871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.698736ms" Mar 2 12:55:38.818956 containerd[1586]: time="2026-03-02T12:55:38.818900689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 2 12:55:38.821819 containerd[1586]: time="2026-03-02T12:55:38.821696505Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 2 12:55:39.311958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109581783.mount: Deactivated successfully. Mar 2 12:55:41.185592 containerd[1586]: time="2026-03-02T12:55:41.185486947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:41.186466 containerd[1586]: time="2026-03-02T12:55:41.186142007Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 2 12:55:41.187833 containerd[1586]: time="2026-03-02T12:55:41.187761011Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:41.298889 containerd[1586]: time="2026-03-02T12:55:41.298322139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:41.301266 containerd[1586]: time="2026-03-02T12:55:41.301142387Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.479329956s" Mar 2 12:55:41.301266 containerd[1586]: time="2026-03-02T12:55:41.301232594Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 2 12:55:43.064222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
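Each PullImage block above follows the same shape: resolve the tag, fetch blobs ("stop pulling ... bytes read=N"), emit ImageCreate events for the tag, the config image ID (sha256:...), and the repo digest, then report the wall-clock pull time. The equivalent pull through the containerd Go client is sketched below, reusing the socket and namespace assumptions from the earlier sketch:

    // pull.go: pull and unpack one of the images from the log via containerd.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // WithPullUnpack also unpacks the layers into the snapshotter the
        // daemon selected during plugin loading above (overlayfs here).
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "target", img.Target().Digest)
    }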
Mar 2 12:55:43.167433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:43.373876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:43.380444 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:43.443214 kubelet[2245]: E0302 12:55:43.443062 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:43.448533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:43.449048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:44.374934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:44.386948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:44.637994 systemd[1]: Reloading requested from client PID 2263 ('systemctl') (unit session-9.scope)... Mar 2 12:55:44.638043 systemd[1]: Reloading... Mar 2 12:55:45.043872 zram_generator::config[2302]: No configuration found. Mar 2 12:55:46.066303 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:55:46.154461 systemd[1]: Reloading finished in 1515 ms. Mar 2 12:55:46.209086 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 12:55:46.209213 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 12:55:46.210062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:46.220671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:46.433541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:46.453122 (kubelet)[2360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:55:46.508294 kubelet[2360]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:55:46.508294 kubelet[2360]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 12:55:46.508294 kubelet[2360]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 12:55:46.509160 kubelet[2360]: I0302 12:55:46.508885 2360 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 12:55:46.688513 kubelet[2360]: I0302 12:55:46.688240 2360 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 12:55:46.688513 kubelet[2360]: I0302 12:55:46.688306 2360 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:55:46.688731 kubelet[2360]: I0302 12:55:46.688667 2360 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 12:55:46.716169 kubelet[2360]: E0302 12:55:46.716030 2360 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:55:46.720445 kubelet[2360]: I0302 12:55:46.717779 2360 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:55:46.727242 kubelet[2360]: E0302 12:55:46.727144 2360 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:55:46.727242 kubelet[2360]: I0302 12:55:46.727202 2360 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 12:55:46.736455 kubelet[2360]: I0302 12:55:46.736325 2360 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 2 12:55:46.737309 kubelet[2360]: I0302 12:55:46.737225 2360 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:55:46.737639 kubelet[2360]: I0302 12:55:46.737280 2360 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 2 12:55:46.737953 kubelet[2360]: I0302 12:55:46.737707 2360 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 12:55:46.737953 kubelet[2360]: I0302 12:55:46.737719 2360 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 12:55:46.738191 kubelet[2360]: I0302 12:55:46.738155 2360 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:55:46.741982 kubelet[2360]: I0302 12:55:46.741935 2360 kubelet.go:480] "Attempting to sync node with API server" Mar 2 12:55:46.742041 kubelet[2360]: I0302 12:55:46.741993 2360 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:55:46.742203 kubelet[2360]: I0302 12:55:46.742163 2360 kubelet.go:386] "Adding apiserver pod source" Mar 2 12:55:46.744358 kubelet[2360]: I0302 12:55:46.743944 2360 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:55:46.749227 kubelet[2360]: I0302 12:55:46.749147 2360 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:55:46.749915 kubelet[2360]: I0302 12:55:46.749877 2360 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:55:46.751105 kubelet[2360]: W0302 12:55:46.751049 2360 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 2 12:55:46.752353 kubelet[2360]: E0302 12:55:46.751752 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:55:46.752353 kubelet[2360]: E0302 12:55:46.751761 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 12:55:46.756188 kubelet[2360]: I0302 12:55:46.756114 2360 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 12:55:46.756255 kubelet[2360]: I0302 12:55:46.756246 2360 server.go:1289] "Started kubelet" Mar 2 12:55:46.756913 kubelet[2360]: I0302 12:55:46.756503 2360 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:55:46.757554 kubelet[2360]: I0302 12:55:46.757351 2360 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:55:46.758502 kubelet[2360]: I0302 12:55:46.758377 2360 server.go:317] "Adding debug handlers to kubelet server" Mar 2 12:55:46.759246 kubelet[2360]: I0302 12:55:46.759186 2360 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:55:46.759602 kubelet[2360]: I0302 12:55:46.759539 2360 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 12:55:46.761583 kubelet[2360]: I0302 12:55:46.761492 2360 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:55:46.762369 kubelet[2360]: E0302 12:55:46.758691 2360 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189907797f5bcecd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:55:46.756161229 +0000 UTC m=+0.296151716,LastTimestamp:2026-03-02 12:55:46.756161229 +0000 UTC m=+0.296151716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:55:46.762650 kubelet[2360]: E0302 12:55:46.762606 2360 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:55:46.762872 kubelet[2360]: I0302 12:55:46.762775 2360 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 12:55:46.763350 kubelet[2360]: I0302 12:55:46.763324 2360 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 12:55:46.763766 kubelet[2360]: I0302 12:55:46.763730 2360 reconciler.go:26] "Reconciler: start to sync state" Mar 2 12:55:46.764511 kubelet[2360]: E0302 12:55:46.764344 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 12:55:46.765609 kubelet[2360]: E0302 12:55:46.765435 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Mar 2 12:55:46.766882 kubelet[2360]: I0302 12:55:46.766792 2360 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:55:46.767017 kubelet[2360]: E0302 12:55:46.766955 2360 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:55:46.767017 kubelet[2360]: I0302 12:55:46.766990 2360 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:55:46.769016 kubelet[2360]: I0302 12:55:46.768975 2360 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:55:46.803099 kubelet[2360]: I0302 12:55:46.803035 2360 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 12:55:46.805298 kubelet[2360]: I0302 12:55:46.805074 2360 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 12:55:46.805298 kubelet[2360]: I0302 12:55:46.805093 2360 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 12:55:46.805298 kubelet[2360]: I0302 12:55:46.805114 2360 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:55:46.805298 kubelet[2360]: I0302 12:55:46.805230 2360 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 12:55:46.805516 kubelet[2360]: I0302 12:55:46.805343 2360 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 12:55:46.805516 kubelet[2360]: I0302 12:55:46.805367 2360 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 12:55:46.805562 kubelet[2360]: I0302 12:55:46.805531 2360 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 12:55:46.806438 kubelet[2360]: E0302 12:55:46.806275 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 12:55:46.806913 kubelet[2360]: E0302 12:55:46.806852 2360 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:55:46.861995 kubelet[2360]: I0302 12:55:46.861876 2360 policy_none.go:49] "None policy: Start" Mar 2 12:55:46.862259 kubelet[2360]: I0302 12:55:46.862168 2360 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 12:55:46.862424 kubelet[2360]: I0302 12:55:46.862328 2360 state_mem.go:35] "Initializing new in-memory state store" Mar 2 12:55:46.863441 kubelet[2360]: E0302 12:55:46.863241 2360 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:55:46.871589 kubelet[2360]: E0302 12:55:46.871523 2360 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:55:46.871979 kubelet[2360]: I0302 12:55:46.871922 2360 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 12:55:46.872090 kubelet[2360]: I0302 12:55:46.871960 2360 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:55:46.873127 kubelet[2360]: I0302 12:55:46.873018 2360 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 12:55:46.874027 kubelet[2360]: E0302 12:55:46.873987 2360 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 12:55:46.874087 kubelet[2360]: E0302 12:55:46.874073 2360 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 12:55:46.927347 kubelet[2360]: E0302 12:55:46.927210 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:46.928791 kubelet[2360]: E0302 12:55:46.928728 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:46.930655 kubelet[2360]: E0302 12:55:46.930617 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:46.965126 kubelet[2360]: I0302 12:55:46.964976 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc5c6e4b0f839a44a55432d9a19e94c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc5c6e4b0f839a44a55432d9a19e94c5\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:46.965126 kubelet[2360]: I0302 12:55:46.965035 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc5c6e4b0f839a44a55432d9a19e94c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dc5c6e4b0f839a44a55432d9a19e94c5\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:46.965126 kubelet[2360]: I0302 12:55:46.965055 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:46.965126 kubelet[2360]: I0302 12:55:46.965070 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:46.965126 kubelet[2360]: I0302 12:55:46.965087 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:46.965509 kubelet[2360]: I0302 12:55:46.965100 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:46.965509 kubelet[2360]: I0302 12:55:46.965113 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:46.965509 kubelet[2360]: I0302 12:55:46.965128 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:46.965509 kubelet[2360]: I0302 12:55:46.965141 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc5c6e4b0f839a44a55432d9a19e94c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc5c6e4b0f839a44a55432d9a19e94c5\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:46.966726 kubelet[2360]: E0302 12:55:46.966638 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Mar 2 12:55:46.974175 kubelet[2360]: I0302 12:55:46.974138 2360 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:55:46.974621 kubelet[2360]: E0302 12:55:46.974557 2360 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Mar 2 12:55:47.177270 kubelet[2360]: I0302 12:55:47.177221 2360 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:55:47.177926 kubelet[2360]: E0302 12:55:47.177764 2360 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Mar 2 12:55:47.229049 kubelet[2360]: E0302 12:55:47.228795 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:47.229559 kubelet[2360]: E0302 12:55:47.229489 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:47.230593 containerd[1586]: time="2026-03-02T12:55:47.230525090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dc5c6e4b0f839a44a55432d9a19e94c5,Namespace:kube-system,Attempt:0,}" Mar 2 12:55:47.231257 containerd[1586]: time="2026-03-02T12:55:47.230527452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 2 12:55:47.231963 kubelet[2360]: E0302 12:55:47.231926 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:47.232781 containerd[1586]: time="2026-03-02T12:55:47.232661804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 2 12:55:47.377748 kubelet[2360]: E0302 12:55:47.377578 2360 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Mar 2 12:55:47.581641 kubelet[2360]: I0302 12:55:47.581550 2360 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:55:47.582207 kubelet[2360]: E0302 12:55:47.582051 2360 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Mar 2 12:55:47.699181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341773449.mount: Deactivated successfully. Mar 2 12:55:47.711923 containerd[1586]: time="2026-03-02T12:55:47.711725047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:55:47.714298 containerd[1586]: time="2026-03-02T12:55:47.714246413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 12:55:47.715712 containerd[1586]: time="2026-03-02T12:55:47.715655401Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:55:47.717131 containerd[1586]: time="2026-03-02T12:55:47.717095124Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:55:47.719313 containerd[1586]: time="2026-03-02T12:55:47.719251858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:55:47.719791 containerd[1586]: time="2026-03-02T12:55:47.719729713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 12:55:47.721187 containerd[1586]: time="2026-03-02T12:55:47.721079855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 12:55:47.722576 containerd[1586]: time="2026-03-02T12:55:47.722528897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:55:47.723862 containerd[1586]: time="2026-03-02T12:55:47.723801398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.991307ms" Mar 2 12:55:47.728262 containerd[1586]: time="2026-03-02T12:55:47.728191645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.305136ms" Mar 2 
12:55:47.729963 containerd[1586]: time="2026-03-02T12:55:47.729815875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.056309ms" Mar 2 12:55:47.843595 kubelet[2360]: E0302 12:55:47.842585 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 12:55:47.913125 kubelet[2360]: E0302 12:55:47.912912 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:55:47.920613 kubelet[2360]: E0302 12:55:47.920511 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 12:55:47.986749 containerd[1586]: time="2026-03-02T12:55:47.986470718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:55:47.988469 containerd[1586]: time="2026-03-02T12:55:47.988083334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:55:47.988469 containerd[1586]: time="2026-03-02T12:55:47.988194281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:55:47.988562 containerd[1586]: time="2026-03-02T12:55:47.988500060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:55:48.007470 containerd[1586]: time="2026-03-02T12:55:48.007164599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:55:48.007470 containerd[1586]: time="2026-03-02T12:55:48.007341129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:55:48.008688 containerd[1586]: time="2026-03-02T12:55:48.007521785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:55:48.008688 containerd[1586]: time="2026-03-02T12:55:48.007919827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:55:48.014209 containerd[1586]: time="2026-03-02T12:55:48.014071058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:55:48.014264 containerd[1586]: time="2026-03-02T12:55:48.014221708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:55:48.014264 containerd[1586]: time="2026-03-02T12:55:48.014250392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:55:48.014912 containerd[1586]: time="2026-03-02T12:55:48.014602037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:55:48.262931 kubelet[2360]: E0302 12:55:48.261686 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Mar 2 12:55:48.291431 kubelet[2360]: E0302 12:55:48.285643 2360 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 12:55:48.367058 update_engine[1565]: I20260302 12:55:48.350203 1565 update_attempter.cc:509] Updating boot flags... Mar 2 12:55:48.387799 kubelet[2360]: I0302 12:55:48.387369 2360 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:55:48.392461 kubelet[2360]: E0302 12:55:48.388697 2360 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Mar 2 12:55:48.458640 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2482) Mar 2 12:55:48.626443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2483) Mar 2 12:55:48.630537 containerd[1586]: time="2026-03-02T12:55:48.630368871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b15a8b223186f5c43d8b18f28dbd58ddb0a60c2b215697c8e3e1b6add8a023\"" Mar 2 12:55:48.633488 kubelet[2360]: E0302 12:55:48.633330 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:48.643105 containerd[1586]: time="2026-03-02T12:55:48.643054302Z" level=info msg="CreateContainer within sandbox \"d8b15a8b223186f5c43d8b18f28dbd58ddb0a60c2b215697c8e3e1b6add8a023\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 12:55:48.653448 containerd[1586]: time="2026-03-02T12:55:48.651696728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dc5c6e4b0f839a44a55432d9a19e94c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca50484e17103e4b94d1812c2c82970d9d34f4f776a3de27dc8bdbec78a8c45\"" Mar 2 12:55:48.655266 kubelet[2360]: E0302 12:55:48.655207 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 
12:55:48.658357 containerd[1586]: time="2026-03-02T12:55:48.658282713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"283f4e08833e43ba94cf996b8afa8f58c5a5e78c6a15524b8b871d4dba7a0fcf\"" Mar 2 12:55:48.659543 kubelet[2360]: E0302 12:55:48.659340 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:48.661364 containerd[1586]: time="2026-03-02T12:55:48.661307069Z" level=info msg="CreateContainer within sandbox \"8ca50484e17103e4b94d1812c2c82970d9d34f4f776a3de27dc8bdbec78a8c45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 12:55:48.668601 containerd[1586]: time="2026-03-02T12:55:48.668533587Z" level=info msg="CreateContainer within sandbox \"283f4e08833e43ba94cf996b8afa8f58c5a5e78c6a15524b8b871d4dba7a0fcf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 12:55:48.675435 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2483) Mar 2 12:55:48.675511 containerd[1586]: time="2026-03-02T12:55:48.675453566Z" level=info msg="CreateContainer within sandbox \"d8b15a8b223186f5c43d8b18f28dbd58ddb0a60c2b215697c8e3e1b6add8a023\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b60f89efc8ecea621d7e26028f25bfa24f37fac1d29569ab7d81fbe167868119\"" Mar 2 12:55:48.676235 containerd[1586]: time="2026-03-02T12:55:48.676154097Z" level=info msg="StartContainer for \"b60f89efc8ecea621d7e26028f25bfa24f37fac1d29569ab7d81fbe167868119\"" Mar 2 12:55:48.701757 containerd[1586]: time="2026-03-02T12:55:48.701726677Z" level=info msg="CreateContainer within sandbox \"8ca50484e17103e4b94d1812c2c82970d9d34f4f776a3de27dc8bdbec78a8c45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"699e2fe6b692583d76e7a3778966b6f358146e46935970f97c11956cd55bff22\"" Mar 2 12:55:48.705184 containerd[1586]: time="2026-03-02T12:55:48.705113848Z" level=info msg="StartContainer for \"699e2fe6b692583d76e7a3778966b6f358146e46935970f97c11956cd55bff22\"" Mar 2 12:55:48.711810 containerd[1586]: time="2026-03-02T12:55:48.711764317Z" level=info msg="CreateContainer within sandbox \"283f4e08833e43ba94cf996b8afa8f58c5a5e78c6a15524b8b871d4dba7a0fcf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32f59b8af9eafd4bd5fcf44b275706ef5258c44a244d4ce69b33ed7785e1e554\"" Mar 2 12:55:48.717951 containerd[1586]: time="2026-03-02T12:55:48.717727826Z" level=info msg="StartContainer for \"32f59b8af9eafd4bd5fcf44b275706ef5258c44a244d4ce69b33ed7785e1e554\"" Mar 2 12:55:48.731956 kubelet[2360]: E0302 12:55:48.731868 2360 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:55:48.852429 containerd[1586]: time="2026-03-02T12:55:48.851669865Z" level=info msg="StartContainer for \"b60f89efc8ecea621d7e26028f25bfa24f37fac1d29569ab7d81fbe167868119\" returns successfully" Mar 2 12:55:48.863455 containerd[1586]: time="2026-03-02T12:55:48.860703440Z" level=info msg="StartContainer for 
\"32f59b8af9eafd4bd5fcf44b275706ef5258c44a244d4ce69b33ed7785e1e554\" returns successfully" Mar 2 12:55:48.868703 containerd[1586]: time="2026-03-02T12:55:48.868591333Z" level=info msg="StartContainer for \"699e2fe6b692583d76e7a3778966b6f358146e46935970f97c11956cd55bff22\" returns successfully" Mar 2 12:55:49.006979 kubelet[2360]: E0302 12:55:49.006709 2360 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189907797f5bcecd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:55:46.756161229 +0000 UTC m=+0.296151716,LastTimestamp:2026-03-02 12:55:46.756161229 +0000 UTC m=+0.296151716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:55:49.992592 kubelet[2360]: I0302 12:55:49.992535 2360 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:55:50.005480 kubelet[2360]: E0302 12:55:50.001775 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:50.005480 kubelet[2360]: E0302 12:55:50.002003 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:50.005480 kubelet[2360]: E0302 12:55:50.002262 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:50.005480 kubelet[2360]: E0302 12:55:50.002340 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:50.019445 kubelet[2360]: E0302 12:55:50.018800 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:50.019445 kubelet[2360]: E0302 12:55:50.019025 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:51.371305 kubelet[2360]: E0302 12:55:51.371230 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:51.372315 kubelet[2360]: E0302 12:55:51.371546 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:51.372315 kubelet[2360]: E0302 12:55:51.371775 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:51.372315 kubelet[2360]: E0302 12:55:51.371913 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 2 12:55:51.372315 kubelet[2360]: E0302 12:55:51.372127 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:51.372315 kubelet[2360]: E0302 12:55:51.372211 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:52.422018 kubelet[2360]: E0302 12:55:52.421972 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:52.424312 kubelet[2360]: E0302 12:55:52.421979 2360 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:55:52.424629 kubelet[2360]: E0302 12:55:52.424586 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:52.424716 kubelet[2360]: E0302 12:55:52.424693 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:53.277653 kubelet[2360]: E0302 12:55:53.277585 2360 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 12:55:53.354219 kubelet[2360]: I0302 12:55:53.354169 2360 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 12:55:53.365242 kubelet[2360]: I0302 12:55:53.365162 2360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:53.532251 kubelet[2360]: I0302 12:55:53.530912 2360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:53.548194 kubelet[2360]: E0302 12:55:53.547724 2360 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:53.548676 kubelet[2360]: E0302 12:55:53.548511 2360 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:53.548726 kubelet[2360]: E0302 12:55:53.548683 2360 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:53.548726 kubelet[2360]: I0302 12:55:53.548705 2360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:53.551852 kubelet[2360]: E0302 12:55:53.551686 2360 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:53.551852 kubelet[2360]: I0302 12:55:53.551747 2360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:53.553932 kubelet[2360]: E0302 12:55:53.553907 2360 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:54.200261 kubelet[2360]: I0302 12:55:54.199752 2360 apiserver.go:52] "Watching apiserver" Mar 2 12:55:54.363739 kubelet[2360]: I0302 12:55:54.363642 2360 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 12:55:55.916846 systemd[1]: Reloading requested from client PID 2667 ('systemctl') (unit session-9.scope)... Mar 2 12:55:55.916881 systemd[1]: Reloading... Mar 2 12:55:56.026604 zram_generator::config[2706]: No configuration found. Mar 2 12:55:56.186798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:55:56.276685 systemd[1]: Reloading finished in 359 ms. Mar 2 12:55:56.324352 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:56.347781 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 12:55:56.348359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:56.364989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:56.547717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:56.563026 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:55:56.639954 kubelet[2761]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:55:56.639954 kubelet[2761]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 12:55:56.639954 kubelet[2761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 12:55:56.640543 kubelet[2761]: I0302 12:55:56.639975 2761 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 12:55:56.649642 kubelet[2761]: I0302 12:55:56.649516 2761 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 12:55:56.649642 kubelet[2761]: I0302 12:55:56.649565 2761 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:55:56.651726 kubelet[2761]: I0302 12:55:56.650503 2761 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 12:55:56.653221 kubelet[2761]: I0302 12:55:56.653156 2761 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 12:55:56.656350 kubelet[2761]: I0302 12:55:56.656235 2761 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:55:56.661711 kubelet[2761]: E0302 12:55:56.661662 2761 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:55:56.661790 kubelet[2761]: I0302 12:55:56.661714 2761 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 12:55:56.669134 kubelet[2761]: I0302 12:55:56.669029 2761 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 2 12:55:56.670001 kubelet[2761]: I0302 12:55:56.669883 2761 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:55:56.670184 kubelet[2761]: I0302 12:55:56.669947 2761 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 2 12:55:56.670184 kubelet[2761]: I0302 12:55:56.670144 2761 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 2 12:55:56.670184 kubelet[2761]: I0302 12:55:56.670157 2761 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 12:55:56.670439 kubelet[2761]: I0302 12:55:56.670218 2761 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:55:56.670601 kubelet[2761]: I0302 12:55:56.670546 2761 kubelet.go:480] "Attempting to sync node with API server" Mar 2 12:55:56.670601 kubelet[2761]: I0302 12:55:56.670589 2761 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:55:56.670661 kubelet[2761]: I0302 12:55:56.670617 2761 kubelet.go:386] "Adding apiserver pod source" Mar 2 12:55:56.670661 kubelet[2761]: I0302 12:55:56.670637 2761 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:55:56.673191 kubelet[2761]: I0302 12:55:56.673121 2761 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:55:56.676096 kubelet[2761]: I0302 12:55:56.673768 2761 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:55:56.688586 kubelet[2761]: I0302 12:55:56.686724 2761 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 12:55:56.688586 kubelet[2761]: I0302 12:55:56.686766 2761 server.go:1289] "Started kubelet" Mar 2 12:55:56.689232 kubelet[2761]: I0302 12:55:56.689203 2761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 12:55:56.690516 kubelet[2761]: I0302 12:55:56.690196 2761 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 12:55:56.690610 kubelet[2761]: I0302 12:55:56.690540 2761 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 12:55:56.691471 kubelet[2761]: I0302 12:55:56.691173 2761 reconciler.go:26] "Reconciler: start to sync state" Mar 2 12:55:56.695255 kubelet[2761]: I0302 12:55:56.693288 2761 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:55:56.695255 kubelet[2761]: I0302 12:55:56.693553 2761 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:55:56.695255 kubelet[2761]: I0302 12:55:56.694052 2761 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:55:56.695255 kubelet[2761]: I0302 12:55:56.694598 2761 server.go:317] "Adding debug handlers to kubelet server" Mar 2 12:55:56.695255 kubelet[2761]: I0302 12:55:56.694682 2761 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:55:56.698012 kubelet[2761]: I0302 12:55:56.697731 2761 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:55:56.698012 kubelet[2761]: I0302 12:55:56.697900 2761 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:55:56.702336 kubelet[2761]: I0302 12:55:56.702286 2761 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:55:56.703692 kubelet[2761]: E0302 12:55:56.703619 2761 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:55:56.733741 kubelet[2761]: I0302 12:55:56.733511 2761 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 12:55:56.736996 kubelet[2761]: I0302 12:55:56.736890 2761 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 12:55:56.736996 kubelet[2761]: I0302 12:55:56.736939 2761 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 12:55:56.736996 kubelet[2761]: I0302 12:55:56.736966 2761 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 12:55:56.746792 kubelet[2761]: I0302 12:55:56.746063 2761 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 12:55:56.746792 kubelet[2761]: E0302 12:55:56.746140 2761 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:55:56.793792 kubelet[2761]: I0302 12:55:56.793718 2761 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 12:55:56.793792 kubelet[2761]: I0302 12:55:56.793766 2761 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 12:55:56.793792 kubelet[2761]: I0302 12:55:56.793790 2761 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:55:56.794021 kubelet[2761]: I0302 12:55:56.793996 2761 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 12:55:56.794049 kubelet[2761]: I0302 12:55:56.794008 2761 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 12:55:56.794049 kubelet[2761]: I0302 12:55:56.794030 2761 policy_none.go:49] "None policy: Start" Mar 2 12:55:56.794049 kubelet[2761]: I0302 12:55:56.794043 2761 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 12:55:56.794117 kubelet[2761]: I0302 12:55:56.794056 2761 state_mem.go:35] "Initializing new in-memory state store" Mar 2 12:55:56.794203 kubelet[2761]: I0302 12:55:56.794164 2761 state_mem.go:75] "Updated machine memory state" Mar 2 12:55:56.798691 kubelet[2761]: E0302 12:55:56.796527 2761 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:55:56.798691 kubelet[2761]: I0302 12:55:56.796789 2761 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 12:55:56.798691 kubelet[2761]: I0302 12:55:56.796890 2761 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:55:56.798691 kubelet[2761]: I0302 12:55:56.797237 2761 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 12:55:56.800354 kubelet[2761]: E0302 12:55:56.800333 2761 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 12:55:56.847888 kubelet[2761]: I0302 12:55:56.847746 2761 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:56.848160 kubelet[2761]: I0302 12:55:56.847754 2761 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:56.848756 kubelet[2761]: I0302 12:55:56.848655 2761 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:56.892988 kubelet[2761]: I0302 12:55:56.892697 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc5c6e4b0f839a44a55432d9a19e94c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc5c6e4b0f839a44a55432d9a19e94c5\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:56.892988 kubelet[2761]: I0302 12:55:56.892744 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc5c6e4b0f839a44a55432d9a19e94c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc5c6e4b0f839a44a55432d9a19e94c5\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:56.892988 kubelet[2761]: I0302 12:55:56.892772 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc5c6e4b0f839a44a55432d9a19e94c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dc5c6e4b0f839a44a55432d9a19e94c5\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:56.892988 kubelet[2761]: I0302 12:55:56.892802 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:56.892988 kubelet[2761]: I0302 12:55:56.892877 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:56.893290 kubelet[2761]: I0302 12:55:56.892898 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:56.893290 kubelet[2761]: I0302 12:55:56.892917 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:56.896496 kubelet[2761]: I0302 12:55:56.893730 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:55:56.896496 kubelet[2761]: I0302 12:55:56.893779 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:56.910073 kubelet[2761]: I0302 12:55:56.910000 2761 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:55:56.922205 kubelet[2761]: I0302 12:55:56.921369 2761 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 12:55:56.922905 kubelet[2761]: I0302 12:55:56.922775 2761 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 12:55:57.156675 kubelet[2761]: E0302 12:55:57.156547 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:57.161065 kubelet[2761]: E0302 12:55:57.160972 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:57.162201 kubelet[2761]: E0302 12:55:57.162048 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:57.675881 kubelet[2761]: I0302 12:55:57.675730 2761 apiserver.go:52] "Watching apiserver" Mar 2 12:55:57.800473 kubelet[2761]: I0302 12:55:57.800053 2761 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 12:55:57.900259 kubelet[2761]: I0302 12:55:57.900152 2761 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:57.901792 kubelet[2761]: I0302 12:55:57.901714 2761 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:58.530956 kubelet[2761]: E0302 12:55:58.530855 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:58.571922 kubelet[2761]: E0302 12:55:58.570693 2761 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 12:55:58.571922 kubelet[2761]: E0302 12:55:58.571247 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:58.571922 kubelet[2761]: E0302 12:55:58.570692 2761 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 12:55:58.577865 kubelet[2761]: E0302 12:55:58.577751 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:58.655485 kubelet[2761]: I0302 12:55:58.653056 2761 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.652997052 podStartE2EDuration="2.652997052s" podCreationTimestamp="2026-03-02 12:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:55:58.651759635 +0000 UTC m=+2.077475665" watchObservedRunningTime="2026-03-02 12:55:58.652997052 +0000 UTC m=+2.078713082" Mar 2 12:55:58.701479 kubelet[2761]: I0302 12:55:58.701336 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.7013198320000003 podStartE2EDuration="2.701319832s" podCreationTimestamp="2026-03-02 12:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:55:58.700378415 +0000 UTC m=+2.126094445" watchObservedRunningTime="2026-03-02 12:55:58.701319832 +0000 UTC m=+2.127035861" Mar 2 12:55:58.702256 kubelet[2761]: I0302 12:55:58.701517 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.701510096 podStartE2EDuration="2.701510096s" podCreationTimestamp="2026-03-02 12:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:55:58.677181443 +0000 UTC m=+2.102897473" watchObservedRunningTime="2026-03-02 12:55:58.701510096 +0000 UTC m=+2.127226126" Mar 2 12:55:58.902542 kubelet[2761]: E0302 12:55:58.902065 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:58.902719 kubelet[2761]: E0302 12:55:58.902289 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:00.024612 kubelet[2761]: E0302 12:56:00.024242 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:00.909360 kubelet[2761]: E0302 12:56:00.909282 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:01.715532 kubelet[2761]: I0302 12:56:01.715361 2761 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 12:56:01.717509 kubelet[2761]: I0302 12:56:01.717490 2761 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 12:56:01.717662 containerd[1586]: time="2026-03-02T12:56:01.716967475Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 2 12:56:02.269522 kubelet[2761]: I0302 12:56:02.269325 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7fbf4b29-8a16-4a35-b4d3-553bf8fcb350-kube-proxy\") pod \"kube-proxy-96p2m\" (UID: \"7fbf4b29-8a16-4a35-b4d3-553bf8fcb350\") " pod="kube-system/kube-proxy-96p2m"
Mar 2 12:56:02.269522 kubelet[2761]: I0302 12:56:02.269438 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fbf4b29-8a16-4a35-b4d3-553bf8fcb350-xtables-lock\") pod \"kube-proxy-96p2m\" (UID: \"7fbf4b29-8a16-4a35-b4d3-553bf8fcb350\") " pod="kube-system/kube-proxy-96p2m"
Mar 2 12:56:02.269522 kubelet[2761]: I0302 12:56:02.269547 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fbf4b29-8a16-4a35-b4d3-553bf8fcb350-lib-modules\") pod \"kube-proxy-96p2m\" (UID: \"7fbf4b29-8a16-4a35-b4d3-553bf8fcb350\") " pod="kube-system/kube-proxy-96p2m"
Mar 2 12:56:02.269522 kubelet[2761]: I0302 12:56:02.269565 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl85s\" (UniqueName: \"kubernetes.io/projected/7fbf4b29-8a16-4a35-b4d3-553bf8fcb350-kube-api-access-bl85s\") pod \"kube-proxy-96p2m\" (UID: \"7fbf4b29-8a16-4a35-b4d3-553bf8fcb350\") " pod="kube-system/kube-proxy-96p2m"
Mar 2 12:56:02.345500 kubelet[2761]: E0302 12:56:02.344899 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:02.379351 kubelet[2761]: E0302 12:56:02.379220 2761 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 2 12:56:02.379351 kubelet[2761]: E0302 12:56:02.379281 2761 projected.go:194] Error preparing data for projected volume kube-api-access-bl85s for pod kube-system/kube-proxy-96p2m: configmap "kube-root-ca.crt" not found
Mar 2 12:56:02.379629 kubelet[2761]: E0302 12:56:02.379482 2761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fbf4b29-8a16-4a35-b4d3-553bf8fcb350-kube-api-access-bl85s podName:7fbf4b29-8a16-4a35-b4d3-553bf8fcb350 nodeName:}" failed. No retries permitted until 2026-03-02 12:56:02.879462037 +0000 UTC m=+6.305178067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bl85s" (UniqueName: "kubernetes.io/projected/7fbf4b29-8a16-4a35-b4d3-553bf8fcb350-kube-api-access-bl85s") pod "kube-proxy-96p2m" (UID: "7fbf4b29-8a16-4a35-b4d3-553bf8fcb350") : configmap "kube-root-ca.crt" not found
Mar 2 12:56:02.599355 kubelet[2761]: E0302 12:56:02.599252 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:02.914790 kubelet[2761]: E0302 12:56:02.914109 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:02.914790 kubelet[2761]: E0302 12:56:02.914480 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:02.974449 kubelet[2761]: I0302 12:56:02.974285 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e196f74-bd1b-4e0d-ac7a-5e3a7d334deb-var-lib-calico\") pod \"tigera-operator-7d4578d8d-9lpgn\" (UID: \"9e196f74-bd1b-4e0d-ac7a-5e3a7d334deb\") " pod="tigera-operator/tigera-operator-7d4578d8d-9lpgn"
Mar 2 12:56:02.974449 kubelet[2761]: I0302 12:56:02.974469 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nbc\" (UniqueName: \"kubernetes.io/projected/9e196f74-bd1b-4e0d-ac7a-5e3a7d334deb-kube-api-access-r7nbc\") pod \"tigera-operator-7d4578d8d-9lpgn\" (UID: \"9e196f74-bd1b-4e0d-ac7a-5e3a7d334deb\") " pod="tigera-operator/tigera-operator-7d4578d8d-9lpgn"
Mar 2 12:56:03.033728 kubelet[2761]: E0302 12:56:03.033611 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:03.035140 containerd[1586]: time="2026-03-02T12:56:03.035030062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96p2m,Uid:7fbf4b29-8a16-4a35-b4d3-553bf8fcb350,Namespace:kube-system,Attempt:0,}"
Mar 2 12:56:03.083030 containerd[1586]: time="2026-03-02T12:56:03.082529728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:56:03.083030 containerd[1586]: time="2026-03-02T12:56:03.082916811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:56:03.083030 containerd[1586]: time="2026-03-02T12:56:03.082934474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:03.087023 containerd[1586]: time="2026-03-02T12:56:03.086797289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:03.166502 containerd[1586]: time="2026-03-02T12:56:03.166061001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-9lpgn,Uid:9e196f74-bd1b-4e0d-ac7a-5e3a7d334deb,Namespace:tigera-operator,Attempt:0,}"
Mar 2 12:56:03.208913 containerd[1586]: time="2026-03-02T12:56:03.208792158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96p2m,Uid:7fbf4b29-8a16-4a35-b4d3-553bf8fcb350,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7467520499ccd6b0419b74f939a32320d532cff897dd62366ee4e5ac6c957fe\""
Mar 2 12:56:03.213970 kubelet[2761]: E0302 12:56:03.210373 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:03.255214 containerd[1586]: time="2026-03-02T12:56:03.255041597Z" level=info msg="CreateContainer within sandbox \"a7467520499ccd6b0419b74f939a32320d532cff897dd62366ee4e5ac6c957fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 12:56:03.260941 containerd[1586]: time="2026-03-02T12:56:03.260605363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:56:03.261507 containerd[1586]: time="2026-03-02T12:56:03.260904261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:56:03.261507 containerd[1586]: time="2026-03-02T12:56:03.261107792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:03.262272 containerd[1586]: time="2026-03-02T12:56:03.262005707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:03.290026 containerd[1586]: time="2026-03-02T12:56:03.289902071Z" level=info msg="CreateContainer within sandbox \"a7467520499ccd6b0419b74f939a32320d532cff897dd62366ee4e5ac6c957fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d2135c8d57bf64e91909dd35a643b04a402ba46053df8821c6bc75b5187f8be\""
Mar 2 12:56:03.292015 containerd[1586]: time="2026-03-02T12:56:03.291551981Z" level=info msg="StartContainer for \"3d2135c8d57bf64e91909dd35a643b04a402ba46053df8821c6bc75b5187f8be\""
Mar 2 12:56:03.525953 containerd[1586]: time="2026-03-02T12:56:03.525305685Z" level=info msg="StartContainer for \"3d2135c8d57bf64e91909dd35a643b04a402ba46053df8821c6bc75b5187f8be\" returns successfully"
Mar 2 12:56:03.525953 containerd[1586]: time="2026-03-02T12:56:03.525341557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-9lpgn,Uid:9e196f74-bd1b-4e0d-ac7a-5e3a7d334deb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc7eb04a3b6d0b60256594ff8213fbcd098ee4834133f6ecff1d32006f8f44ec\""
Mar 2 12:56:03.530446 containerd[1586]: time="2026-03-02T12:56:03.530284528Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\""
Mar 2 12:56:03.960002 kubelet[2761]: E0302 12:56:03.959918 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:03.962876 kubelet[2761]: E0302 12:56:03.962216 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:06.557236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094195676.mount: Deactivated successfully.
Mar 2 12:56:06.763903 kubelet[2761]: I0302 12:56:06.763715 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-96p2m" podStartSLOduration=4.763698722 podStartE2EDuration="4.763698722s" podCreationTimestamp="2026-03-02 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:04.006104787 +0000 UTC m=+7.431820848" watchObservedRunningTime="2026-03-02 12:56:06.763698722 +0000 UTC m=+10.189414752"
Mar 2 12:56:08.693614 containerd[1586]: time="2026-03-02T12:56:08.693325948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:56:08.695588 containerd[1586]: time="2026-03-02T12:56:08.695536210Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719"
Mar 2 12:56:08.696937 containerd[1586]: time="2026-03-02T12:56:08.696899527Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:56:08.702082 containerd[1586]: time="2026-03-02T12:56:08.701989665Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:56:08.703320 containerd[1586]: time="2026-03-02T12:56:08.703205030Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 5.172886971s"
Mar 2 12:56:08.703320 containerd[1586]: time="2026-03-02T12:56:08.703274090Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\""
Mar 2 12:56:08.730275 containerd[1586]: time="2026-03-02T12:56:08.730097993Z" level=info msg="CreateContainer within sandbox \"bc7eb04a3b6d0b60256594ff8213fbcd098ee4834133f6ecff1d32006f8f44ec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 2 12:56:08.758709 containerd[1586]: time="2026-03-02T12:56:08.758619755Z" level=info msg="CreateContainer within sandbox \"bc7eb04a3b6d0b60256594ff8213fbcd098ee4834133f6ecff1d32006f8f44ec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb1220798a31a5f5e52deef07612e3978402d32e4ba89c0021a5ee2e867fff3a\""
Mar 2 12:56:08.759379 containerd[1586]: time="2026-03-02T12:56:08.759288133Z" level=info msg="StartContainer for \"cb1220798a31a5f5e52deef07612e3978402d32e4ba89c0021a5ee2e867fff3a\""
Mar 2 12:56:08.967333 containerd[1586]: time="2026-03-02T12:56:08.967159517Z" level=info msg="StartContainer for \"cb1220798a31a5f5e52deef07612e3978402d32e4ba89c0021a5ee2e867fff3a\" returns successfully"
Mar 2 12:56:09.630177 kubelet[2761]: I0302 12:56:09.630056 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d4578d8d-9lpgn" podStartSLOduration=2.453680998 podStartE2EDuration="7.630034935s" podCreationTimestamp="2026-03-02 12:56:02 +0000 UTC" firstStartedPulling="2026-03-02 12:56:03.528713653 +0000 UTC m=+6.954429684" lastFinishedPulling="2026-03-02 12:56:08.705067581 +0000 UTC m=+12.130783621" observedRunningTime="2026-03-02 12:56:09.629589394 +0000 UTC m=+13.055305424" watchObservedRunningTime="2026-03-02 12:56:09.630034935 +0000 UTC m=+13.055750996"
Mar 2 12:56:16.286120 sudo[1803]: pam_unix(sudo:session): session closed for user root
Mar 2 12:56:16.290950 sshd[1796]: pam_unix(sshd:session): session closed for user core
Mar 2 12:56:16.303078 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:42060.service: Deactivated successfully.
Mar 2 12:56:16.316232 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 12:56:16.326326 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit.
Mar 2 12:56:16.334901 systemd-logind[1562]: Removed session 9.
Mar 2 12:56:19.095378 kubelet[2761]: I0302 12:56:19.095255 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e8396c7-78f1-4fc2-b2b2-181a63a0d39d-tigera-ca-bundle\") pod \"calico-typha-5c74f4c5d4-4wczg\" (UID: \"9e8396c7-78f1-4fc2-b2b2-181a63a0d39d\") " pod="calico-system/calico-typha-5c74f4c5d4-4wczg"
Mar 2 12:56:19.096105 kubelet[2761]: I0302 12:56:19.095462 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9e8396c7-78f1-4fc2-b2b2-181a63a0d39d-typha-certs\") pod \"calico-typha-5c74f4c5d4-4wczg\" (UID: \"9e8396c7-78f1-4fc2-b2b2-181a63a0d39d\") " pod="calico-system/calico-typha-5c74f4c5d4-4wczg"
Mar 2 12:56:19.096105 kubelet[2761]: I0302 12:56:19.095497 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fv7z\" (UniqueName: \"kubernetes.io/projected/9e8396c7-78f1-4fc2-b2b2-181a63a0d39d-kube-api-access-5fv7z\") pod \"calico-typha-5c74f4c5d4-4wczg\" (UID: \"9e8396c7-78f1-4fc2-b2b2-181a63a0d39d\") " pod="calico-system/calico-typha-5c74f4c5d4-4wczg"
Mar 2 12:56:19.172772 kubelet[2761]: E0302 12:56:19.172629 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730"
Mar 2 12:56:19.196934 kubelet[2761]: I0302 12:56:19.196774 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/55cf2921-5277-426e-8505-3a4a56621020-node-certs\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.196934 kubelet[2761]: I0302 12:56:19.196894 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-policysync\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.196934 kubelet[2761]: I0302 12:56:19.196925 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-flexvol-driver-host\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.196934 kubelet[2761]: I0302 12:56:19.196953 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-sys-fs\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197231 kubelet[2761]: I0302 12:56:19.196968 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkj4v\" (UniqueName: \"kubernetes.io/projected/55cf2921-5277-426e-8505-3a4a56621020-kube-api-access-lkj4v\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197231 kubelet[2761]: I0302 12:56:19.196997 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55cf2921-5277-426e-8505-3a4a56621020-tigera-ca-bundle\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197231 kubelet[2761]: I0302 12:56:19.197011 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-bpffs\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197231 kubelet[2761]: I0302 12:56:19.197026 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-cni-log-dir\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197231 kubelet[2761]: I0302 12:56:19.197041 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-var-lib-calico\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197698 kubelet[2761]: I0302 12:56:19.197081 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-cni-bin-dir\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197698 kubelet[2761]: I0302 12:56:19.197108 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-cni-net-dir\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197698 kubelet[2761]: I0302 12:56:19.197134 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-nodeproc\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197698 kubelet[2761]: I0302 12:56:19.197165 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-xtables-lock\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.197698 kubelet[2761]: I0302 12:56:19.197197 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-lib-modules\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.201195 kubelet[2761]: I0302 12:56:19.197299 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/55cf2921-5277-426e-8505-3a4a56621020-var-run-calico\") pod \"calico-node-mh7bq\" (UID: \"55cf2921-5277-426e-8505-3a4a56621020\") " pod="calico-system/calico-node-mh7bq"
Mar 2 12:56:19.298634 kubelet[2761]: I0302 12:56:19.298461 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e862cd4-019e-430f-a2e4-79712cc8a730-registration-dir\") pod \"csi-node-driver-lczbs\" (UID: \"7e862cd4-019e-430f-a2e4-79712cc8a730\") " pod="calico-system/csi-node-driver-lczbs"
Mar 2 12:56:19.299594 kubelet[2761]: I0302 12:56:19.298906 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7e862cd4-019e-430f-a2e4-79712cc8a730-varrun\") pod \"csi-node-driver-lczbs\" (UID: \"7e862cd4-019e-430f-a2e4-79712cc8a730\") " pod="calico-system/csi-node-driver-lczbs"
Mar 2 12:56:19.299594 kubelet[2761]: I0302 12:56:19.298970 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e862cd4-019e-430f-a2e4-79712cc8a730-socket-dir\") pod \"csi-node-driver-lczbs\" (UID: \"7e862cd4-019e-430f-a2e4-79712cc8a730\") " pod="calico-system/csi-node-driver-lczbs"
Mar 2 12:56:19.302343 kubelet[2761]: I0302 12:56:19.301335 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e862cd4-019e-430f-a2e4-79712cc8a730-kubelet-dir\") pod \"csi-node-driver-lczbs\" (UID: \"7e862cd4-019e-430f-a2e4-79712cc8a730\") " pod="calico-system/csi-node-driver-lczbs"
Mar 2 12:56:19.302343 kubelet[2761]: I0302 12:56:19.301700 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5xg5\" (UniqueName: \"kubernetes.io/projected/7e862cd4-019e-430f-a2e4-79712cc8a730-kube-api-access-r5xg5\") pod \"csi-node-driver-lczbs\" (UID: \"7e862cd4-019e-430f-a2e4-79712cc8a730\") " pod="calico-system/csi-node-driver-lczbs"
Mar 2 12:56:19.302990 kubelet[2761]: E0302 12:56:19.302888 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.302990 kubelet[2761]: W0302 12:56:19.302941 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.309167 kubelet[2761]: E0302 12:56:19.305609 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.309167 kubelet[2761]: E0302 12:56:19.306061 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.309167 kubelet[2761]: W0302 12:56:19.306076 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.309167 kubelet[2761]: E0302 12:56:19.306094 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.309167 kubelet[2761]: E0302 12:56:19.307101 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.309167 kubelet[2761]: W0302 12:56:19.307115 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.309167 kubelet[2761]: E0302 12:56:19.307135 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.309167 kubelet[2761]: E0302 12:56:19.308933 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.309167 kubelet[2761]: W0302 12:56:19.308949 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.309509 kubelet[2761]: E0302 12:56:19.308978 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.311449 kubelet[2761]: E0302 12:56:19.309716 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.311449 kubelet[2761]: W0302 12:56:19.309731 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.311449 kubelet[2761]: E0302 12:56:19.309743 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.316152 kubelet[2761]: E0302 12:56:19.315798 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.316152 kubelet[2761]: W0302 12:56:19.315882 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.316152 kubelet[2761]: E0302 12:56:19.315904 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.317644 kubelet[2761]: E0302 12:56:19.317538 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.317644 kubelet[2761]: W0302 12:56:19.317556 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.317644 kubelet[2761]: E0302 12:56:19.317570 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.318959 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.321191 kubelet[2761]: W0302 12:56:19.318978 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.319003 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.319448 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.321191 kubelet[2761]: W0302 12:56:19.319463 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.319475 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.319878 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.321191 kubelet[2761]: W0302 12:56:19.319888 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.319901 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.321191 kubelet[2761]: E0302 12:56:19.320264 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.321698 kubelet[2761]: W0302 12:56:19.320274 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.321698 kubelet[2761]: E0302 12:56:19.320283 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.321698 kubelet[2761]: E0302 12:56:19.320667 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.321698 kubelet[2761]: W0302 12:56:19.320677 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.321698 kubelet[2761]: E0302 12:56:19.320686 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.321698 kubelet[2761]: E0302 12:56:19.321038 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.321698 kubelet[2761]: W0302 12:56:19.321049 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.321698 kubelet[2761]: E0302 12:56:19.321059 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.322550 kubelet[2761]: E0302 12:56:19.322310 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:19.326679 kubelet[2761]: E0302 12:56:19.326346 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.326679 kubelet[2761]: W0302 12:56:19.326364 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.326679 kubelet[2761]: E0302 12:56:19.326378 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.326893 containerd[1586]: time="2026-03-02T12:56:19.326371314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c74f4c5d4-4wczg,Uid:9e8396c7-78f1-4fc2-b2b2-181a63a0d39d,Namespace:calico-system,Attempt:0,}"
Mar 2 12:56:19.371945 containerd[1586]: time="2026-03-02T12:56:19.370104394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:56:19.371945 containerd[1586]: time="2026-03-02T12:56:19.370651797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:56:19.371945 containerd[1586]: time="2026-03-02T12:56:19.370683876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:19.371945 containerd[1586]: time="2026-03-02T12:56:19.371249842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:19.373455 containerd[1586]: time="2026-03-02T12:56:19.372343002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mh7bq,Uid:55cf2921-5277-426e-8505-3a4a56621020,Namespace:calico-system,Attempt:0,}"
Mar 2 12:56:19.403564 kubelet[2761]: E0302 12:56:19.403346 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.403564 kubelet[2761]: W0302 12:56:19.403477 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.403564 kubelet[2761]: E0302 12:56:19.403508 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.406563 kubelet[2761]: E0302 12:56:19.404907 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.406563 kubelet[2761]: W0302 12:56:19.404920 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.406563 kubelet[2761]: E0302 12:56:19.404935 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.406563 kubelet[2761]: E0302 12:56:19.406329 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.406563 kubelet[2761]: W0302 12:56:19.406344 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.406563 kubelet[2761]: E0302 12:56:19.406467 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.406990 kubelet[2761]: E0302 12:56:19.406932 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.407054 kubelet[2761]: W0302 12:56:19.406993 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.407054 kubelet[2761]: E0302 12:56:19.407014 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.409479 kubelet[2761]: E0302 12:56:19.407667 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.409479 kubelet[2761]: W0302 12:56:19.407687 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.409479 kubelet[2761]: E0302 12:56:19.407702 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.409479 kubelet[2761]: E0302 12:56:19.408330 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.409479 kubelet[2761]: W0302 12:56:19.408342 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.409479 kubelet[2761]: E0302 12:56:19.408356 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.409708 kubelet[2761]: E0302 12:56:19.409581 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.409708 kubelet[2761]: W0302 12:56:19.409594 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.409708 kubelet[2761]: E0302 12:56:19.409610 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.410258 kubelet[2761]: E0302 12:56:19.410185 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.410258 kubelet[2761]: W0302 12:56:19.410242 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.410349 kubelet[2761]: E0302 12:56:19.410261 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.410971 kubelet[2761]: E0302 12:56:19.410892 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.410971 kubelet[2761]: W0302 12:56:19.410948 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.410971 kubelet[2761]: E0302 12:56:19.410964 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.411647 kubelet[2761]: E0302 12:56:19.411565 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.411647 kubelet[2761]: W0302 12:56:19.411618 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.411647 kubelet[2761]: E0302 12:56:19.411635 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.412181 kubelet[2761]: E0302 12:56:19.412102 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.412181 kubelet[2761]: W0302 12:56:19.412152 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.412181 kubelet[2761]: E0302 12:56:19.412171 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.412953 kubelet[2761]: E0302 12:56:19.412898 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.412953 kubelet[2761]: W0302 12:56:19.412948 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.413050 kubelet[2761]: E0302 12:56:19.412965 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.416475 kubelet[2761]: E0302 12:56:19.415143 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.416475 kubelet[2761]: W0302 12:56:19.415162 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.416475 kubelet[2761]: E0302 12:56:19.415301 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.417378 kubelet[2761]: E0302 12:56:19.416794 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.417378 kubelet[2761]: W0302 12:56:19.416811 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.417378 kubelet[2761]: E0302 12:56:19.416882 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.419636 kubelet[2761]: E0302 12:56:19.419153 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.419724 kubelet[2761]: W0302 12:56:19.419658 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.419724 kubelet[2761]: E0302 12:56:19.419677 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.420331 kubelet[2761]: E0302 12:56:19.420297 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.420331 kubelet[2761]: W0302 12:56:19.420326 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.420517 kubelet[2761]: E0302 12:56:19.420336 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.420930 kubelet[2761]: E0302 12:56:19.420892 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.420930 kubelet[2761]: W0302 12:56:19.420922 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.421024 kubelet[2761]: E0302 12:56:19.420934 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.421505 kubelet[2761]: E0302 12:56:19.421443 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.421590 kubelet[2761]: W0302 12:56:19.421548 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.421628 kubelet[2761]: E0302 12:56:19.421592 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.422045 kubelet[2761]: E0302 12:56:19.422012 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.422045 kubelet[2761]: W0302 12:56:19.422044 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.422122 kubelet[2761]: E0302 12:56:19.422054 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.422761 kubelet[2761]: E0302 12:56:19.422674 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.422761 kubelet[2761]: W0302 12:56:19.422708 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.422761 kubelet[2761]: E0302 12:56:19.422718 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.423255 kubelet[2761]: E0302 12:56:19.423132 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.423255 kubelet[2761]: W0302 12:56:19.423144 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.423255 kubelet[2761]: E0302 12:56:19.423152 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.423638 kubelet[2761]: E0302 12:56:19.423622 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.423638 kubelet[2761]: W0302 12:56:19.423632 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.423683 kubelet[2761]: E0302 12:56:19.423643 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.424115 kubelet[2761]: E0302 12:56:19.424036 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.424115 kubelet[2761]: W0302 12:56:19.424048 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.424115 kubelet[2761]: E0302 12:56:19.424058 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.425063 kubelet[2761]: E0302 12:56:19.425011 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.425063 kubelet[2761]: W0302 12:56:19.425024 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.425063 kubelet[2761]: E0302 12:56:19.425034 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.425735 kubelet[2761]: E0302 12:56:19.425613 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.425735 kubelet[2761]: W0302 12:56:19.425661 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.425735 kubelet[2761]: E0302 12:56:19.425676 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.435315 containerd[1586]: time="2026-03-02T12:56:19.434741262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:56:19.435315 containerd[1586]: time="2026-03-02T12:56:19.434869141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:56:19.435315 containerd[1586]: time="2026-03-02T12:56:19.434897705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:19.435315 containerd[1586]: time="2026-03-02T12:56:19.435027186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:56:19.442534 kubelet[2761]: E0302 12:56:19.442263 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:56:19.442534 kubelet[2761]: W0302 12:56:19.442280 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:56:19.442534 kubelet[2761]: E0302 12:56:19.442295 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:56:19.496969 containerd[1586]: time="2026-03-02T12:56:19.496790586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c74f4c5d4-4wczg,Uid:9e8396c7-78f1-4fc2-b2b2-181a63a0d39d,Namespace:calico-system,Attempt:0,} returns sandbox id \"82acbde1151b109106c6090bcb9208e0544ae172d3d57857321152d557d37f17\""
Mar 2 12:56:19.503663 kubelet[2761]: E0302 12:56:19.503484 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:19.508432 containerd[1586]: time="2026-03-02T12:56:19.505091199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\""
Mar 2 12:56:19.519708 containerd[1586]: time="2026-03-02T12:56:19.519527841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mh7bq,Uid:55cf2921-5277-426e-8505-3a4a56621020,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\""
Mar 2 12:56:20.748644 kubelet[2761]: E0302 12:56:20.748598 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730"
Mar 2 12:56:21.052241 containerd[1586]: time="2026-03-02T12:56:21.052081129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:56:21.053479 containerd[1586]: time="2026-03-02T12:56:21.053336202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=36094696"
Mar 2 12:56:21.055197 containerd[1586]: time="2026-03-02T12:56:21.055151515Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:56:21.057767 containerd[1586]: time="2026-03-02T12:56:21.057712106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:56:21.058682 containerd[1586]: time="2026-03-02T12:56:21.058562975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 1.553436539s"
Mar 2 12:56:21.058682 containerd[1586]: time="2026-03-02T12:56:21.058618347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\""
Mar 2 12:56:21.060239 containerd[1586]: time="2026-03-02T12:56:21.060142293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\""
Mar 2 12:56:21.076375 containerd[1586]: time="2026-03-02T12:56:21.076318095Z" level=info msg="CreateContainer within sandbox \"82acbde1151b109106c6090bcb9208e0544ae172d3d57857321152d557d37f17\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 2 12:56:21.096784 containerd[1586]: time="2026-03-02T12:56:21.096622994Z" level=info msg="CreateContainer within sandbox \"82acbde1151b109106c6090bcb9208e0544ae172d3d57857321152d557d37f17\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8856a31dbb2c404efb30578760671cd0ca801d3552c24de2cbc663392b3c621c\""
time="2026-03-02T12:56:21.096622994Z" level=info msg="CreateContainer within sandbox \"82acbde1151b109106c6090bcb9208e0544ae172d3d57857321152d557d37f17\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8856a31dbb2c404efb30578760671cd0ca801d3552c24de2cbc663392b3c621c\"" Mar 2 12:56:21.097375 containerd[1586]: time="2026-03-02T12:56:21.097333110Z" level=info msg="StartContainer for \"8856a31dbb2c404efb30578760671cd0ca801d3552c24de2cbc663392b3c621c\"" Mar 2 12:56:21.188596 containerd[1586]: time="2026-03-02T12:56:21.188469759Z" level=info msg="StartContainer for \"8856a31dbb2c404efb30578760671cd0ca801d3552c24de2cbc663392b3c621c\" returns successfully" Mar 2 12:56:21.671631 containerd[1586]: time="2026-03-02T12:56:21.671556572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:21.672613 containerd[1586]: time="2026-03-02T12:56:21.672574420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4630152" Mar 2 12:56:21.673713 containerd[1586]: time="2026-03-02T12:56:21.673670289Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:21.677896 containerd[1586]: time="2026-03-02T12:56:21.677791841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:21.678615 containerd[1586]: time="2026-03-02T12:56:21.678569785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 618.39997ms" Mar 2 12:56:21.678667 containerd[1586]: time="2026-03-02T12:56:21.678624497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\"" Mar 2 12:56:21.683904 kubelet[2761]: E0302 12:56:21.683816 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:21.683972 containerd[1586]: time="2026-03-02T12:56:21.683890376Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 12:56:21.696498 kubelet[2761]: I0302 12:56:21.695955 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c74f4c5d4-4wczg" podStartSLOduration=2.140783807 podStartE2EDuration="3.695942065s" podCreationTimestamp="2026-03-02 12:56:18 +0000 UTC" firstStartedPulling="2026-03-02 12:56:19.504800096 +0000 UTC m=+22.930516127" lastFinishedPulling="2026-03-02 12:56:21.059958355 +0000 UTC m=+24.485674385" observedRunningTime="2026-03-02 12:56:21.69590685 +0000 UTC m=+25.121622890" watchObservedRunningTime="2026-03-02 12:56:21.695942065 +0000 UTC m=+25.121658096" Mar 2 12:56:21.704289 
containerd[1586]: time="2026-03-02T12:56:21.704254183Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f1cd29a532229ffee989da4a9c5b0d6d601f545d7173be083208e0de481d551d\"" Mar 2 12:56:21.704987 containerd[1586]: time="2026-03-02T12:56:21.704941158Z" level=info msg="StartContainer for \"f1cd29a532229ffee989da4a9c5b0d6d601f545d7173be083208e0de481d551d\"" Mar 2 12:56:21.716953 kubelet[2761]: E0302 12:56:21.716912 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.716953 kubelet[2761]: W0302 12:56:21.716952 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.717053 kubelet[2761]: E0302 12:56:21.716972 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.717491 kubelet[2761]: E0302 12:56:21.717354 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.717491 kubelet[2761]: W0302 12:56:21.717464 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.717491 kubelet[2761]: E0302 12:56:21.717478 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.717862 kubelet[2761]: E0302 12:56:21.717785 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.717940 kubelet[2761]: W0302 12:56:21.717906 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.717940 kubelet[2761]: E0302 12:56:21.717938 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.718344 kubelet[2761]: E0302 12:56:21.718310 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.718344 kubelet[2761]: W0302 12:56:21.718343 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.718474 kubelet[2761]: E0302 12:56:21.718355 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:56:21.718744 kubelet[2761]: E0302 12:56:21.718710 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.718744 kubelet[2761]: W0302 12:56:21.718743 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.718795 kubelet[2761]: E0302 12:56:21.718753 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.719139 kubelet[2761]: E0302 12:56:21.719110 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.719139 kubelet[2761]: W0302 12:56:21.719138 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.719190 kubelet[2761]: E0302 12:56:21.719147 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.719663 kubelet[2761]: E0302 12:56:21.719629 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.719663 kubelet[2761]: W0302 12:56:21.719662 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.719713 kubelet[2761]: E0302 12:56:21.719671 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.719991 kubelet[2761]: E0302 12:56:21.719958 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.719991 kubelet[2761]: W0302 12:56:21.719989 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.720040 kubelet[2761]: E0302 12:56:21.719998 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.720335 kubelet[2761]: E0302 12:56:21.720305 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.720335 kubelet[2761]: W0302 12:56:21.720333 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.720469 kubelet[2761]: E0302 12:56:21.720342 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:56:21.720811 kubelet[2761]: E0302 12:56:21.720790 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.720811 kubelet[2761]: W0302 12:56:21.720804 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.720907 kubelet[2761]: E0302 12:56:21.720816 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.721905 kubelet[2761]: E0302 12:56:21.721872 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.721905 kubelet[2761]: W0302 12:56:21.721904 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.721976 kubelet[2761]: E0302 12:56:21.721915 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.722371 kubelet[2761]: E0302 12:56:21.722209 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.722371 kubelet[2761]: W0302 12:56:21.722224 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.722371 kubelet[2761]: E0302 12:56:21.722235 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.726617 kubelet[2761]: E0302 12:56:21.726466 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.726617 kubelet[2761]: W0302 12:56:21.726481 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.726617 kubelet[2761]: E0302 12:56:21.726494 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.726951 kubelet[2761]: E0302 12:56:21.726937 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.727011 kubelet[2761]: W0302 12:56:21.727000 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.727063 kubelet[2761]: E0302 12:56:21.727053 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:56:21.727373 kubelet[2761]: E0302 12:56:21.727302 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.727373 kubelet[2761]: W0302 12:56:21.727313 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.727373 kubelet[2761]: E0302 12:56:21.727322 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.730796 kubelet[2761]: E0302 12:56:21.730695 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.730796 kubelet[2761]: W0302 12:56:21.730729 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.730796 kubelet[2761]: E0302 12:56:21.730740 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.731055 kubelet[2761]: E0302 12:56:21.731040 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.731152 kubelet[2761]: W0302 12:56:21.731113 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.731152 kubelet[2761]: E0302 12:56:21.731158 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.731660 kubelet[2761]: E0302 12:56:21.731629 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.731755 kubelet[2761]: W0302 12:56:21.731660 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.731755 kubelet[2761]: E0302 12:56:21.731731 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.732584 kubelet[2761]: E0302 12:56:21.732480 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.732584 kubelet[2761]: W0302 12:56:21.732514 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.732584 kubelet[2761]: E0302 12:56:21.732525 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:56:21.733226 kubelet[2761]: E0302 12:56:21.733127 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.733226 kubelet[2761]: W0302 12:56:21.733156 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.733226 kubelet[2761]: E0302 12:56:21.733166 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.733813 kubelet[2761]: E0302 12:56:21.733713 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.733813 kubelet[2761]: W0302 12:56:21.733742 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.733813 kubelet[2761]: E0302 12:56:21.733752 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.734299 kubelet[2761]: E0302 12:56:21.734139 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.734299 kubelet[2761]: W0302 12:56:21.734151 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.734299 kubelet[2761]: E0302 12:56:21.734160 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.734645 kubelet[2761]: E0302 12:56:21.734530 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.734645 kubelet[2761]: W0302 12:56:21.734560 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.734645 kubelet[2761]: E0302 12:56:21.734572 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.735205 kubelet[2761]: E0302 12:56:21.735067 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.735205 kubelet[2761]: W0302 12:56:21.735099 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.735205 kubelet[2761]: E0302 12:56:21.735109 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:56:21.735756 kubelet[2761]: E0302 12:56:21.735608 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.735756 kubelet[2761]: W0302 12:56:21.735641 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.735756 kubelet[2761]: E0302 12:56:21.735651 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.736063 kubelet[2761]: E0302 12:56:21.736048 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.736260 kubelet[2761]: W0302 12:56:21.736109 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.736260 kubelet[2761]: E0302 12:56:21.736122 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.736898 kubelet[2761]: E0302 12:56:21.736670 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.736898 kubelet[2761]: W0302 12:56:21.736683 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.736898 kubelet[2761]: E0302 12:56:21.736693 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.737155 kubelet[2761]: E0302 12:56:21.737139 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.738014 kubelet[2761]: W0302 12:56:21.737810 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.738014 kubelet[2761]: E0302 12:56:21.737872 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.738299 kubelet[2761]: E0302 12:56:21.738286 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.738360 kubelet[2761]: W0302 12:56:21.738349 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.738609 kubelet[2761]: E0302 12:56:21.738461 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:56:21.739029 kubelet[2761]: E0302 12:56:21.738870 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.739029 kubelet[2761]: W0302 12:56:21.738882 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.739029 kubelet[2761]: E0302 12:56:21.738892 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.739199 kubelet[2761]: E0302 12:56:21.739187 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.739256 kubelet[2761]: W0302 12:56:21.739245 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.739300 kubelet[2761]: E0302 12:56:21.739290 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.739780 kubelet[2761]: E0302 12:56:21.739767 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.739885 kubelet[2761]: W0302 12:56:21.739872 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.739935 kubelet[2761]: E0302 12:56:21.739922 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.740523 kubelet[2761]: E0302 12:56:21.740510 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:56:21.740578 kubelet[2761]: W0302 12:56:21.740568 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:56:21.740620 kubelet[2761]: E0302 12:56:21.740610 2761 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:56:21.789076 containerd[1586]: time="2026-03-02T12:56:21.788985332Z" level=info msg="StartContainer for \"f1cd29a532229ffee989da4a9c5b0d6d601f545d7173be083208e0de481d551d\" returns successfully" Mar 2 12:56:21.846632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1cd29a532229ffee989da4a9c5b0d6d601f545d7173be083208e0de481d551d-rootfs.mount: Deactivated successfully. 
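The kubelet errors above come from the FlexVolume dynamic-probe path: on each probe the kubelet executes the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ with the argument `init` and unmarshals whatever the driver prints to stdout as JSON. Because the `uds` executable is not installed, stdout is empty and decoding the empty string fails with "unexpected end of JSON input". As a minimal sketch (not Calico's actual `uds` binary), a driver that satisfies the probe prints a status object of roughly this shape:

```go
// Hypothetical FlexVolume driver stub. The kubelet invokes the executable
// with "init" and parses its stdout as JSON; an empty reply produces the
// "unexpected end of JSON input" errors seen in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the response shape the FlexVolume call protocol
// expects: a status string plus optional capabilities for "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, err := json.Marshal(s)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that this driver needs no attach step.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Any other call: tell the kubelet the operation is not supported.
	reply(driverStatus{Status: "Not supported"})
}
```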
Mar 2 12:56:21.937321 containerd[1586]: time="2026-03-02T12:56:21.937061772Z" level=info msg="shim disconnected" id=f1cd29a532229ffee989da4a9c5b0d6d601f545d7173be083208e0de481d551d namespace=k8s.io Mar 2 12:56:21.937321 containerd[1586]: time="2026-03-02T12:56:21.937191525Z" level=warning msg="cleaning up after shim disconnected" id=f1cd29a532229ffee989da4a9c5b0d6d601f545d7173be083208e0de481d551d namespace=k8s.io Mar 2 12:56:21.937321 containerd[1586]: time="2026-03-02T12:56:21.937207915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:56:22.686157 kubelet[2761]: I0302 12:56:22.686124 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:56:22.686732 kubelet[2761]: E0302 12:56:22.686612 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:22.688743 containerd[1586]: time="2026-03-02T12:56:22.688710166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 12:56:22.747660 kubelet[2761]: E0302 12:56:22.747593 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730" Mar 2 12:56:24.754469 kubelet[2761]: E0302 12:56:24.754360 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730" Mar 2 12:56:26.746705 kubelet[2761]: E0302 12:56:26.746618 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730" Mar 2 12:56:28.135287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717188776.mount: Deactivated successfully. 
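The recurring "cni plugin not initialized" sync failures for csi-node-driver-lczbs persist until Calico's install-cni container (pulled and started further down in this log) drops a network config into the CNI conf directory; only then does the runtime report NetworkReady=true. A rough stdlib-only approximation of that readiness check, assuming the conventional /etc/cni/net.d location rather than whatever this host's containerd config actually specifies:

```go
// Approximation of the CRI runtime's "is CNI initialized" check: readiness
// amounts to at least one network config file existing in the conf dir.
package main

import (
	"fmt"
	"path/filepath"
)

// cniReady reports whether any CNI network config is present in confDir.
func cniReady(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(confDir, pattern)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/cni/net.d" // assumed default, not read from containerd config
	fmt.Printf("NetworkReady=%v (conf dir %s)\n", cniReady(dir), dir)
}
```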
Mar 2 12:56:28.296293 containerd[1586]: time="2026-03-02T12:56:28.296122055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:28.297232 containerd[1586]: time="2026-03-02T12:56:28.297167838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365" Mar 2 12:56:28.299063 containerd[1586]: time="2026-03-02T12:56:28.298945863Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:28.302710 containerd[1586]: time="2026-03-02T12:56:28.302593277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:28.303298 containerd[1586]: time="2026-03-02T12:56:28.303171392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 5.614421182s" Mar 2 12:56:28.303298 containerd[1586]: time="2026-03-02T12:56:28.303233869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\"" Mar 2 12:56:28.316361 containerd[1586]: time="2026-03-02T12:56:28.316257624Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 12:56:28.404506 containerd[1586]: time="2026-03-02T12:56:28.404187784Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"0d2b74bd279a2e63cddbb53806b9a427a5af70df0a1676395430eb23187cac4a\"" Mar 2 12:56:28.405883 containerd[1586]: time="2026-03-02T12:56:28.405797605Z" level=info msg="StartContainer for \"0d2b74bd279a2e63cddbb53806b9a427a5af70df0a1676395430eb23187cac4a\"" Mar 2 12:56:28.760563 kubelet[2761]: E0302 12:56:28.759773 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730" Mar 2 12:56:29.232627 containerd[1586]: time="2026-03-02T12:56:29.232464221Z" level=info msg="StartContainer for \"0d2b74bd279a2e63cddbb53806b9a427a5af70df0a1676395430eb23187cac4a\" returns successfully" Mar 2 12:56:29.358202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2b74bd279a2e63cddbb53806b9a427a5af70df0a1676395430eb23187cac4a-rootfs.mount: Deactivated successfully. 
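The pod_startup_latency_tracker entry logged at 12:56:21.695 for calico-typha-5c74f4c5d4-4wczg reconciles exactly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (firstStartedPulling to lastFinishedPulling) from that. A small check using the wall-clock timestamps from the log entry itself; the kubelet works from the monotonic readings (the m=+ offsets), so the last nanosecond digit can differ:

```go
// Recompute the calico-typha startup durations from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-02 12:56:18 +0000 UTC")
	firstPull := mustParse("2026-03-02 12:56:19.504800096 +0000 UTC")
	lastPull := mustParse("2026-03-02 12:56:21.059958355 +0000 UTC")
	running := mustParse("2026-03-02 12:56:21.695942065 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 3.695942065s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~2.140783807s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```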
Mar 2 12:56:29.362738 containerd[1586]: time="2026-03-02T12:56:29.362643272Z" level=info msg="shim disconnected" id=0d2b74bd279a2e63cddbb53806b9a427a5af70df0a1676395430eb23187cac4a namespace=k8s.io Mar 2 12:56:29.363240 containerd[1586]: time="2026-03-02T12:56:29.362749460Z" level=warning msg="cleaning up after shim disconnected" id=0d2b74bd279a2e63cddbb53806b9a427a5af70df0a1676395430eb23187cac4a namespace=k8s.io Mar 2 12:56:29.363240 containerd[1586]: time="2026-03-02T12:56:29.362768305Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:56:29.843739 containerd[1586]: time="2026-03-02T12:56:29.843651896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 12:56:30.746619 kubelet[2761]: E0302 12:56:30.746484 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730" Mar 2 12:56:32.148929 containerd[1586]: time="2026-03-02T12:56:32.148717126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:32.150896 containerd[1586]: time="2026-03-02T12:56:32.150761393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418" Mar 2 12:56:32.152224 containerd[1586]: time="2026-03-02T12:56:32.152148373Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:32.156162 containerd[1586]: time="2026-03-02T12:56:32.156076687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:32.156809 containerd[1586]: time="2026-03-02T12:56:32.156713286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 2.312996179s" Mar 2 12:56:32.156809 containerd[1586]: time="2026-03-02T12:56:32.156785862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\"" Mar 2 12:56:32.163694 containerd[1586]: time="2026-03-02T12:56:32.163562479Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 12:56:32.186760 containerd[1586]: time="2026-03-02T12:56:32.186660991Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"09e104b1281111c63cde11d51252976cb18908fa0a68c8cfcb832abe7680eed3\"" Mar 2 12:56:32.188549 containerd[1586]: time="2026-03-02T12:56:32.187678456Z" level=info msg="StartContainer for \"09e104b1281111c63cde11d51252976cb18908fa0a68c8cfcb832abe7680eed3\"" Mar 2 12:56:32.297678 containerd[1586]: time="2026-03-02T12:56:32.297104068Z" 
level=info msg="StartContainer for \"09e104b1281111c63cde11d51252976cb18908fa0a68c8cfcb832abe7680eed3\" returns successfully" Mar 2 12:56:32.747458 kubelet[2761]: E0302 12:56:32.747300 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lczbs" podUID="7e862cd4-019e-430f-a2e4-79712cc8a730" Mar 2 12:56:33.312740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e104b1281111c63cde11d51252976cb18908fa0a68c8cfcb832abe7680eed3-rootfs.mount: Deactivated successfully. Mar 2 12:56:33.314502 containerd[1586]: time="2026-03-02T12:56:33.314337502Z" level=info msg="shim disconnected" id=09e104b1281111c63cde11d51252976cb18908fa0a68c8cfcb832abe7680eed3 namespace=k8s.io Mar 2 12:56:33.315048 containerd[1586]: time="2026-03-02T12:56:33.314503662Z" level=warning msg="cleaning up after shim disconnected" id=09e104b1281111c63cde11d51252976cb18908fa0a68c8cfcb832abe7680eed3 namespace=k8s.io Mar 2 12:56:33.315048 containerd[1586]: time="2026-03-02T12:56:33.314516025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:56:33.355133 kubelet[2761]: I0302 12:56:33.354895 2761 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 2 12:56:33.472942 kubelet[2761]: I0302 12:56:33.471700 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27ea198-031e-421a-9756-e262b0869b53-config\") pod \"goldmane-9566f57b5-rdrh7\" (UID: \"c27ea198-031e-421a-9756-e262b0869b53\") " pod="calico-system/goldmane-9566f57b5-rdrh7" Mar 2 12:56:33.472942 kubelet[2761]: I0302 12:56:33.471753 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f4839b9f-029a-4d28-b714-da8fd2fa861e-kube-api-access-jldjj\") pod \"whisker-85c99cfd46-7khdv\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " pod="calico-system/whisker-85c99cfd46-7khdv" Mar 2 12:56:33.472942 kubelet[2761]: I0302 12:56:33.471786 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20530e0e-7523-46f2-bf7b-30bc40bef15b-tigera-ca-bundle\") pod \"calico-kube-controllers-6c5c78c78-476fj\" (UID: \"20530e0e-7523-46f2-bf7b-30bc40bef15b\") " pod="calico-system/calico-kube-controllers-6c5c78c78-476fj" Mar 2 12:56:33.472942 kubelet[2761]: I0302 12:56:33.471812 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c27ea198-031e-421a-9756-e262b0869b53-goldmane-key-pair\") pod \"goldmane-9566f57b5-rdrh7\" (UID: \"c27ea198-031e-421a-9756-e262b0869b53\") " pod="calico-system/goldmane-9566f57b5-rdrh7" Mar 2 12:56:33.472942 kubelet[2761]: I0302 12:56:33.471896 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-backend-key-pair\") pod \"whisker-85c99cfd46-7khdv\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " pod="calico-system/whisker-85c99cfd46-7khdv" Mar 2 12:56:33.473268 kubelet[2761]: I0302 12:56:33.471927 2761 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9tbj\" (UniqueName: \"kubernetes.io/projected/3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e-kube-api-access-r9tbj\") pod \"calico-apiserver-677b948c89-kzgtl\" (UID: \"3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e\") " pod="calico-system/calico-apiserver-677b948c89-kzgtl" Mar 2 12:56:33.473268 kubelet[2761]: I0302 12:56:33.471957 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57fb18fa-6d09-4965-b41b-6c5cac95f136-config-volume\") pod \"coredns-674b8bbfcf-sxcg5\" (UID: \"57fb18fa-6d09-4965-b41b-6c5cac95f136\") " pod="kube-system/coredns-674b8bbfcf-sxcg5" Mar 2 12:56:33.473268 kubelet[2761]: I0302 12:56:33.471983 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgwqv\" (UniqueName: \"kubernetes.io/projected/c27ea198-031e-421a-9756-e262b0869b53-kube-api-access-vgwqv\") pod \"goldmane-9566f57b5-rdrh7\" (UID: \"c27ea198-031e-421a-9756-e262b0869b53\") " pod="calico-system/goldmane-9566f57b5-rdrh7" Mar 2 12:56:33.473268 kubelet[2761]: I0302 12:56:33.472012 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b727e3-efa9-44aa-a2c1-a5653c8e04db-config-volume\") pod \"coredns-674b8bbfcf-cv6q7\" (UID: \"73b727e3-efa9-44aa-a2c1-a5653c8e04db\") " pod="kube-system/coredns-674b8bbfcf-cv6q7" Mar 2 12:56:33.473268 kubelet[2761]: I0302 12:56:33.472038 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shf6q\" (UniqueName: \"kubernetes.io/projected/73b727e3-efa9-44aa-a2c1-a5653c8e04db-kube-api-access-shf6q\") pod \"coredns-674b8bbfcf-cv6q7\" (UID: \"73b727e3-efa9-44aa-a2c1-a5653c8e04db\") " pod="kube-system/coredns-674b8bbfcf-cv6q7" Mar 2 12:56:33.473816 kubelet[2761]: I0302 12:56:33.472063 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a848f00-f6ec-4385-a50f-239a27273d12-calico-apiserver-certs\") pod \"calico-apiserver-677b948c89-7z5vf\" (UID: \"5a848f00-f6ec-4385-a50f-239a27273d12\") " pod="calico-system/calico-apiserver-677b948c89-7z5vf" Mar 2 12:56:33.473816 kubelet[2761]: I0302 12:56:33.472095 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dml4q\" (UniqueName: \"kubernetes.io/projected/20530e0e-7523-46f2-bf7b-30bc40bef15b-kube-api-access-dml4q\") pod \"calico-kube-controllers-6c5c78c78-476fj\" (UID: \"20530e0e-7523-46f2-bf7b-30bc40bef15b\") " pod="calico-system/calico-kube-controllers-6c5c78c78-476fj" Mar 2 12:56:33.473816 kubelet[2761]: I0302 12:56:33.472122 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27ea198-031e-421a-9756-e262b0869b53-goldmane-ca-bundle\") pod \"goldmane-9566f57b5-rdrh7\" (UID: \"c27ea198-031e-421a-9756-e262b0869b53\") " pod="calico-system/goldmane-9566f57b5-rdrh7" Mar 2 12:56:33.473816 kubelet[2761]: I0302 12:56:33.472147 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e-calico-apiserver-certs\") pod 
\"calico-apiserver-677b948c89-kzgtl\" (UID: \"3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e\") " pod="calico-system/calico-apiserver-677b948c89-kzgtl" Mar 2 12:56:33.473816 kubelet[2761]: I0302 12:56:33.472176 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8xn\" (UniqueName: \"kubernetes.io/projected/57fb18fa-6d09-4965-b41b-6c5cac95f136-kube-api-access-2g8xn\") pod \"coredns-674b8bbfcf-sxcg5\" (UID: \"57fb18fa-6d09-4965-b41b-6c5cac95f136\") " pod="kube-system/coredns-674b8bbfcf-sxcg5" Mar 2 12:56:33.474074 kubelet[2761]: I0302 12:56:33.472293 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-nginx-config\") pod \"whisker-85c99cfd46-7khdv\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " pod="calico-system/whisker-85c99cfd46-7khdv" Mar 2 12:56:33.474074 kubelet[2761]: I0302 12:56:33.472324 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-ca-bundle\") pod \"whisker-85c99cfd46-7khdv\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " pod="calico-system/whisker-85c99cfd46-7khdv" Mar 2 12:56:33.474074 kubelet[2761]: I0302 12:56:33.472353 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526vz\" (UniqueName: \"kubernetes.io/projected/5a848f00-f6ec-4385-a50f-239a27273d12-kube-api-access-526vz\") pod \"calico-apiserver-677b948c89-7z5vf\" (UID: \"5a848f00-f6ec-4385-a50f-239a27273d12\") " pod="calico-system/calico-apiserver-677b948c89-7z5vf" Mar 2 12:56:33.719252 kubelet[2761]: E0302 12:56:33.719115 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:33.719992 containerd[1586]: time="2026-03-02T12:56:33.719812740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxcg5,Uid:57fb18fa-6d09-4965-b41b-6c5cac95f136,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:33.732581 containerd[1586]: time="2026-03-02T12:56:33.732500544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-kzgtl,Uid:3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:33.732879 containerd[1586]: time="2026-03-02T12:56:33.732538840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5c78c78-476fj,Uid:20530e0e-7523-46f2-bf7b-30bc40bef15b,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:33.752639 kubelet[2761]: E0302 12:56:33.752596 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:33.753747 containerd[1586]: time="2026-03-02T12:56:33.753683693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-rdrh7,Uid:c27ea198-031e-421a-9756-e262b0869b53,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:33.772493 containerd[1586]: time="2026-03-02T12:56:33.772336584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cv6q7,Uid:73b727e3-efa9-44aa-a2c1-a5653c8e04db,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:33.773044 containerd[1586]: 
time="2026-03-02T12:56:33.772978703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c99cfd46-7khdv,Uid:f4839b9f-029a-4d28-b714-da8fd2fa861e,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:33.773375 containerd[1586]: time="2026-03-02T12:56:33.773259728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-7z5vf,Uid:5a848f00-f6ec-4385-a50f-239a27273d12,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:33.937241 containerd[1586]: time="2026-03-02T12:56:33.937052067Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 12:56:33.965054 containerd[1586]: time="2026-03-02T12:56:33.964941879Z" level=info msg="CreateContainer within sandbox \"7c92508f1467672be553ba06b11a6364596c25cc0eddd0931eba460b91012955\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"01680715517420f99aaec629bcd3b8f2caff1f865faaf0c30b4a6301304ee8c8\"" Mar 2 12:56:33.966561 containerd[1586]: time="2026-03-02T12:56:33.966056984Z" level=info msg="StartContainer for \"01680715517420f99aaec629bcd3b8f2caff1f865faaf0c30b4a6301304ee8c8\"" Mar 2 12:56:34.080098 containerd[1586]: time="2026-03-02T12:56:34.080023178Z" level=error msg="Failed to destroy network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.082471 containerd[1586]: time="2026-03-02T12:56:34.082369340Z" level=error msg="encountered an error cleaning up failed sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.103051 containerd[1586]: time="2026-03-02T12:56:34.102922240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxcg5,Uid:57fb18fa-6d09-4965-b41b-6c5cac95f136,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.124746 containerd[1586]: time="2026-03-02T12:56:34.124643800Z" level=error msg="Failed to destroy network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.125353 containerd[1586]: time="2026-03-02T12:56:34.125320305Z" level=error msg="Failed to destroy network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.127879 containerd[1586]: time="2026-03-02T12:56:34.127064539Z" level=error msg="encountered an error cleaning up failed sandbox 
\"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.127879 containerd[1586]: time="2026-03-02T12:56:34.127120644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-7z5vf,Uid:5a848f00-f6ec-4385-a50f-239a27273d12,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.127879 containerd[1586]: time="2026-03-02T12:56:34.124660003Z" level=error msg="Failed to destroy network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.129450 kubelet[2761]: E0302 12:56:34.129352 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.129623 kubelet[2761]: E0302 12:56:34.129529 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.129686 kubelet[2761]: E0302 12:56:34.129646 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sxcg5" Mar 2 12:56:34.129805 kubelet[2761]: E0302 12:56:34.129728 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sxcg5" Mar 2 12:56:34.129947 kubelet[2761]: E0302 12:56:34.129875 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sxcg5_kube-system(57fb18fa-6d09-4965-b41b-6c5cac95f136)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sxcg5_kube-system(57fb18fa-6d09-4965-b41b-6c5cac95f136)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sxcg5" podUID="57fb18fa-6d09-4965-b41b-6c5cac95f136" Mar 2 12:56:34.129947 kubelet[2761]: E0302 12:56:34.129746 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677b948c89-7z5vf" Mar 2 12:56:34.130080 kubelet[2761]: E0302 12:56:34.129952 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677b948c89-7z5vf" Mar 2 12:56:34.130080 kubelet[2761]: E0302 12:56:34.129984 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677b948c89-7z5vf_calico-system(5a848f00-f6ec-4385-a50f-239a27273d12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677b948c89-7z5vf_calico-system(5a848f00-f6ec-4385-a50f-239a27273d12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677b948c89-7z5vf" podUID="5a848f00-f6ec-4385-a50f-239a27273d12" Mar 2 12:56:34.132275 containerd[1586]: time="2026-03-02T12:56:34.130978647Z" level=error msg="encountered an error cleaning up failed sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.132275 containerd[1586]: time="2026-03-02T12:56:34.131064237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c99cfd46-7khdv,Uid:f4839b9f-029a-4d28-b714-da8fd2fa861e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.132275 containerd[1586]: time="2026-03-02T12:56:34.131455918Z" level=error msg="encountered an error cleaning up failed sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.132275 containerd[1586]: time="2026-03-02T12:56:34.131490393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cv6q7,Uid:73b727e3-efa9-44aa-a2c1-a5653c8e04db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.132544 kubelet[2761]: E0302 12:56:34.132109 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.133695 kubelet[2761]: E0302 12:56:34.132249 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c99cfd46-7khdv" Mar 2 12:56:34.133909 kubelet[2761]: E0302 12:56:34.133772 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.133909 kubelet[2761]: E0302 12:56:34.133902 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cv6q7" Mar 2 12:56:34.134072 kubelet[2761]: E0302 12:56:34.133922 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cv6q7" Mar 2 12:56:34.134072 kubelet[2761]: E0302 12:56:34.133960 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cv6q7_kube-system(73b727e3-efa9-44aa-a2c1-a5653c8e04db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cv6q7_kube-system(73b727e3-efa9-44aa-a2c1-a5653c8e04db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cv6q7" podUID="73b727e3-efa9-44aa-a2c1-a5653c8e04db" Mar 2 12:56:34.134072 kubelet[2761]: E0302 12:56:34.134037 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c99cfd46-7khdv" Mar 2 12:56:34.134284 kubelet[2761]: E0302 12:56:34.134153 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85c99cfd46-7khdv_calico-system(f4839b9f-029a-4d28-b714-da8fd2fa861e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85c99cfd46-7khdv_calico-system(f4839b9f-029a-4d28-b714-da8fd2fa861e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85c99cfd46-7khdv" podUID="f4839b9f-029a-4d28-b714-da8fd2fa861e" Mar 2 12:56:34.160107 containerd[1586]: time="2026-03-02T12:56:34.159965441Z" level=error msg="Failed to destroy network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.175289 containerd[1586]: time="2026-03-02T12:56:34.175251144Z" level=info msg="StartContainer for \"01680715517420f99aaec629bcd3b8f2caff1f865faaf0c30b4a6301304ee8c8\" returns successfully" Mar 2 12:56:34.175947 containerd[1586]: time="2026-03-02T12:56:34.175813151Z" level=error msg="encountered an error cleaning up failed sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.175947 containerd[1586]: time="2026-03-02T12:56:34.175927504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-kzgtl,Uid:3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.176240 kubelet[2761]: E0302 12:56:34.176158 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.176335 kubelet[2761]: 
E0302 12:56:34.176255 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677b948c89-kzgtl" Mar 2 12:56:34.176335 kubelet[2761]: E0302 12:56:34.176278 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677b948c89-kzgtl" Mar 2 12:56:34.176500 kubelet[2761]: E0302 12:56:34.176327 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677b948c89-kzgtl_calico-system(3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677b948c89-kzgtl_calico-system(3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677b948c89-kzgtl" podUID="3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e" Mar 2 12:56:34.195678 kubelet[2761]: I0302 12:56:34.195517 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:56:34.203909 kubelet[2761]: E0302 12:56:34.203460 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:34.204465 containerd[1586]: time="2026-03-02T12:56:34.203703748Z" level=error msg="Failed to destroy network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.205518 containerd[1586]: time="2026-03-02T12:56:34.205490563Z" level=error msg="encountered an error cleaning up failed sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.206002 containerd[1586]: time="2026-03-02T12:56:34.205939962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-rdrh7,Uid:c27ea198-031e-421a-9756-e262b0869b53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 
12:56:34.207358 containerd[1586]: time="2026-03-02T12:56:34.206485160Z" level=error msg="Failed to destroy network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.209075 containerd[1586]: time="2026-03-02T12:56:34.208637479Z" level=error msg="encountered an error cleaning up failed sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.209075 containerd[1586]: time="2026-03-02T12:56:34.208702621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5c78c78-476fj,Uid:20530e0e-7523-46f2-bf7b-30bc40bef15b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.209273 kubelet[2761]: E0302 12:56:34.208536 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.209273 kubelet[2761]: E0302 12:56:34.208907 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9566f57b5-rdrh7" Mar 2 12:56:34.209273 kubelet[2761]: E0302 12:56:34.208944 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9566f57b5-rdrh7" Mar 2 12:56:34.209531 kubelet[2761]: E0302 12:56:34.209023 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9566f57b5-rdrh7_calico-system(c27ea198-031e-421a-9756-e262b0869b53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9566f57b5-rdrh7_calico-system(c27ea198-031e-421a-9756-e262b0869b53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-9566f57b5-rdrh7" podUID="c27ea198-031e-421a-9756-e262b0869b53" Mar 2 12:56:34.209531 kubelet[2761]: E0302 12:56:34.209061 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:56:34.209531 kubelet[2761]: E0302 12:56:34.209108 2761 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c5c78c78-476fj" Mar 2 12:56:34.209823 kubelet[2761]: E0302 12:56:34.209127 2761 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c5c78c78-476fj" Mar 2 12:56:34.209823 kubelet[2761]: E0302 12:56:34.209164 2761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c5c78c78-476fj_calico-system(20530e0e-7523-46f2-bf7b-30bc40bef15b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c5c78c78-476fj_calico-system(20530e0e-7523-46f2-bf7b-30bc40bef15b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c5c78c78-476fj" podUID="20530e0e-7523-46f2-bf7b-30bc40bef15b" Mar 2 12:56:34.753803 containerd[1586]: time="2026-03-02T12:56:34.753766343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lczbs,Uid:7e862cd4-019e-430f-a2e4-79712cc8a730,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:34.898642 kubelet[2761]: I0302 12:56:34.898463 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:34.900706 kubelet[2761]: I0302 12:56:34.900685 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:34.905774 kubelet[2761]: I0302 12:56:34.905539 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:34.910468 containerd[1586]: time="2026-03-02T12:56:34.910105533Z" level=info msg="StopPodSandbox for \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\"" Mar 2 12:56:34.912568 containerd[1586]: time="2026-03-02T12:56:34.912361814Z" level=info 
msg="StopPodSandbox for \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\"" Mar 2 12:56:34.913992 kubelet[2761]: I0302 12:56:34.913672 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:34.914797 containerd[1586]: time="2026-03-02T12:56:34.914746969Z" level=info msg="StopPodSandbox for \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\"" Mar 2 12:56:34.918102 containerd[1586]: time="2026-03-02T12:56:34.917241705Z" level=info msg="StopPodSandbox for \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\"" Mar 2 12:56:34.924581 containerd[1586]: time="2026-03-02T12:56:34.924334539Z" level=info msg="Ensure that sandbox 73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6 in task-service has been cleanup successfully" Mar 2 12:56:34.924581 containerd[1586]: time="2026-03-02T12:56:34.924336975Z" level=info msg="Ensure that sandbox 2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd in task-service has been cleanup successfully" Mar 2 12:56:34.925066 containerd[1586]: time="2026-03-02T12:56:34.925039226Z" level=info msg="Ensure that sandbox 9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f in task-service has been cleanup successfully" Mar 2 12:56:34.926080 containerd[1586]: time="2026-03-02T12:56:34.924359100Z" level=info msg="Ensure that sandbox 57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54 in task-service has been cleanup successfully" Mar 2 12:56:34.968662 kubelet[2761]: I0302 12:56:34.968522 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:34.973673 containerd[1586]: time="2026-03-02T12:56:34.973520561Z" level=info msg="StopPodSandbox for \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\"" Mar 2 12:56:34.974636 containerd[1586]: time="2026-03-02T12:56:34.973812456Z" level=info msg="Ensure that sandbox 8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55 in task-service has been cleanup successfully" Mar 2 12:56:34.977138 kubelet[2761]: I0302 12:56:34.976952 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:34.989748 containerd[1586]: time="2026-03-02T12:56:34.989663021Z" level=info msg="StopPodSandbox for \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\"" Mar 2 12:56:34.990167 containerd[1586]: time="2026-03-02T12:56:34.989965897Z" level=info msg="Ensure that sandbox 76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241 in task-service has been cleanup successfully" Mar 2 12:56:35.013125 kubelet[2761]: I0302 12:56:35.012131 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mh7bq" podStartSLOduration=3.37555533 podStartE2EDuration="16.01210788s" podCreationTimestamp="2026-03-02 12:56:19 +0000 UTC" firstStartedPulling="2026-03-02 12:56:19.521330961 +0000 UTC m=+22.947046991" lastFinishedPulling="2026-03-02 12:56:32.157883511 +0000 UTC m=+35.583599541" observedRunningTime="2026-03-02 12:56:34.983888429 +0000 UTC m=+38.409604459" watchObservedRunningTime="2026-03-02 12:56:35.01210788 +0000 UTC m=+38.437823930" Mar 2 12:56:35.024649 kubelet[2761]: E0302 12:56:35.024473 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:35.027066 kubelet[2761]: I0302 12:56:35.026886 2761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:35.033971 containerd[1586]: time="2026-03-02T12:56:35.029251367Z" level=info msg="StopPodSandbox for \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\"" Mar 2 12:56:35.033971 containerd[1586]: time="2026-03-02T12:56:35.029547921Z" level=info msg="Ensure that sandbox 3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a in task-service has been cleanup successfully" Mar 2 12:56:35.098906 systemd[1]: run-containerd-runc-k8s.io-01680715517420f99aaec629bcd3b8f2caff1f865faaf0c30b4a6301304ee8c8-runc.yYNh7E.mount: Deactivated successfully. Mar 2 12:56:35.126025 systemd-networkd[1244]: cali8475350a559: Link UP Mar 2 12:56:35.127310 systemd-networkd[1244]: cali8475350a559: Gained carrier Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.813 [ERROR][3912] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.846 [INFO][3912] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lczbs-eth0 csi-node-driver- calico-system 7e862cd4-019e-430f-a2e4-79712cc8a730 731 0 2026-03-02 12:56:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7494d65b57 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lczbs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8475350a559 [] [] }} ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.846 [INFO][3912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.897 [INFO][3926] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" HandleID="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Workload="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.923 [INFO][3926] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" HandleID="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Workload="localhost-k8s-csi--node--driver--lczbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lczbs", "timestamp":"2026-03-02 12:56:34.89774696 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a6420)} Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.924 [INFO][3926] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.924 [INFO][3926] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.924 [INFO][3926] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.933 [INFO][3926] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:34.956 [INFO][3926] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.000 [INFO][3926] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.030 [INFO][3926] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.037 [INFO][3926] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.037 [INFO][3926] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.042 [INFO][3926] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.048 [INFO][3926] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.062 [INFO][3926] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.062 [INFO][3926] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" host="localhost" Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.062 [INFO][3926] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:35.217460 containerd[1586]: 2026-03-02 12:56:35.062 [INFO][3926] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" HandleID="k8s-pod-network.0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Workload="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.218637 containerd[1586]: 2026-03-02 12:56:35.079 [INFO][3912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lczbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e862cd4-019e-430f-a2e4-79712cc8a730", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lczbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8475350a559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:35.218637 containerd[1586]: 2026-03-02 12:56:35.080 [INFO][3912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.218637 containerd[1586]: 2026-03-02 12:56:35.080 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8475350a559 ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.218637 containerd[1586]: 2026-03-02 12:56:35.126 [INFO][3912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.218637 containerd[1586]: 2026-03-02 12:56:35.145 [INFO][3912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lczbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e862cd4-019e-430f-a2e4-79712cc8a730", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee", Pod:"csi-node-driver-lczbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8475350a559", MAC:"36:a0:0c:6c:62:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:35.218637 containerd[1586]: 2026-03-02 12:56:35.170 [INFO][3912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee" Namespace="calico-system" Pod="csi-node-driver-lczbs" WorkloadEndpoint="localhost-k8s-csi--node--driver--lczbs-eth0" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.117 [INFO][3980] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.118 [INFO][3980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" iface="eth0" netns="/var/run/netns/cni-60c03c0d-8903-bb32-e0e5-cb7f2a5c863a" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.118 [INFO][3980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" iface="eth0" netns="/var/run/netns/cni-60c03c0d-8903-bb32-e0e5-cb7f2a5c863a" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.118 [INFO][3980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" iface="eth0" netns="/var/run/netns/cni-60c03c0d-8903-bb32-e0e5-cb7f2a5c863a" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.118 [INFO][3980] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.119 [INFO][3980] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.332 [INFO][4069] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.333 [INFO][4069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.333 [INFO][4069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.371 [WARNING][4069] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.371 [INFO][4069] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.392 [INFO][4069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.416766 containerd[1586]: 2026-03-02 12:56:35.404 [INFO][3980] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:35.428824 systemd[1]: run-netns-cni\x2d60c03c0d\x2d8903\x2dbb32\x2de0e5\x2dcb7f2a5c863a.mount: Deactivated successfully. Mar 2 12:56:35.434245 containerd[1586]: time="2026-03-02T12:56:35.433194950Z" level=info msg="TearDown network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\" successfully" Mar 2 12:56:35.434245 containerd[1586]: time="2026-03-02T12:56:35.434083319Z" level=info msg="StopPodSandbox for \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\" returns successfully" Mar 2 12:56:35.442774 containerd[1586]: time="2026-03-02T12:56:35.440627394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:35.442774 containerd[1586]: time="2026-03-02T12:56:35.441253563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:35.442774 containerd[1586]: time="2026-03-02T12:56:35.441289891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:35.442774 containerd[1586]: time="2026-03-02T12:56:35.441671653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.147 [INFO][3979] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.147 [INFO][3979] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" iface="eth0" netns="/var/run/netns/cni-62744cde-7fbe-32bc-d8dd-245551b0df07" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.147 [INFO][3979] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" iface="eth0" netns="/var/run/netns/cni-62744cde-7fbe-32bc-d8dd-245551b0df07" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.149 [INFO][3979] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" iface="eth0" netns="/var/run/netns/cni-62744cde-7fbe-32bc-d8dd-245551b0df07" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.149 [INFO][3979] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.149 [INFO][3979] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.412 [INFO][4075] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.412 [INFO][4075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.415 [INFO][4075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.454 [WARNING][4075] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.454 [INFO][4075] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.463 [INFO][4075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.486561 containerd[1586]: 2026-03-02 12:56:35.483 [INFO][3979] cni-plugin/k8s.go 665: Teardown processing complete. 
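
[Editor's note] The teardown traces here all follow one pattern: the plugin first tries to release the sandbox's address by its IPAM handle (ipam_plugin.go 497); when no allocation exists under that handle it logs the "Asked to release address but it doesn't exist. Ignoring" warning (line 514) and falls back to releasing by workload ID (line 525). A schematic Go sketch of that fallback order, with stand-in store functions whose shapes are assumptions rather than Calico's actual API:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the datastore's "allocation does not exist" error.
var errNotFound = errors.New("allocation does not exist")

// releaseByHandle and releaseByWorkload are stand-ins for the two lookup
// keys visible in the trace (HandleID vs. Workload).
func releaseByHandle(handleID string) error    { return errNotFound }
func releaseByWorkload(workloadID string) error { return nil }

// releaseIP mirrors the observed order: handle first, workload ID second.
func releaseIP(handleID, workloadID string) error {
	err := releaseByHandle(handleID)
	if err == nil {
		return nil
	}
	if !errors.Is(err, errNotFound) {
		return err
	}
	// Matches the WARNING logged just before the workload-ID fallback.
	fmt.Println("WARNING: asked to release address but it doesn't exist, ignoring")
	return releaseByWorkload(workloadID)
}

func main() {
	// Handle and workload IDs from the 2f65bb0e... teardown above.
	_ = releaseIP(
		"k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd",
		"localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0",
	)
}
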
ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:35.491502 containerd[1586]: time="2026-03-02T12:56:35.491003870Z" level=info msg="TearDown network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\" successfully" Mar 2 12:56:35.491614 containerd[1586]: time="2026-03-02T12:56:35.491583101Z" level=info msg="StopPodSandbox for \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\" returns successfully" Mar 2 12:56:35.496109 kubelet[2761]: E0302 12:56:35.495317 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:35.501538 kubelet[2761]: I0302 12:56:35.499378 2761 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-backend-key-pair\") pod \"f4839b9f-029a-4d28-b714-da8fd2fa861e\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " Mar 2 12:56:35.501609 containerd[1586]: time="2026-03-02T12:56:35.496119809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxcg5,Uid:57fb18fa-6d09-4965-b41b-6c5cac95f136,Namespace:kube-system,Attempt:1,}" Mar 2 12:56:35.501655 kubelet[2761]: I0302 12:56:35.501482 2761 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-nginx-config\") pod \"f4839b9f-029a-4d28-b714-da8fd2fa861e\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " Mar 2 12:56:35.502065 kubelet[2761]: I0302 12:56:35.501746 2761 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f4839b9f-029a-4d28-b714-da8fd2fa861e-kube-api-access-jldjj\") pod \"f4839b9f-029a-4d28-b714-da8fd2fa861e\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " Mar 2 12:56:35.502065 kubelet[2761]: I0302 12:56:35.501805 2761 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-ca-bundle\") pod \"f4839b9f-029a-4d28-b714-da8fd2fa861e\" (UID: \"f4839b9f-029a-4d28-b714-da8fd2fa861e\") " Mar 2 12:56:35.504530 kubelet[2761]: I0302 12:56:35.504279 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "f4839b9f-029a-4d28-b714-da8fd2fa861e" (UID: "f4839b9f-029a-4d28-b714-da8fd2fa861e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:56:35.505661 systemd[1]: run-netns-cni\x2d62744cde\x2d7fbe\x2d32bc\x2dd8dd\x2d245551b0df07.mount: Deactivated successfully. Mar 2 12:56:35.512971 kubelet[2761]: I0302 12:56:35.508227 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f4839b9f-029a-4d28-b714-da8fd2fa861e" (UID: "f4839b9f-029a-4d28-b714-da8fd2fa861e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:56:35.522627 kubelet[2761]: I0302 12:56:35.520453 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f4839b9f-029a-4d28-b714-da8fd2fa861e" (UID: "f4839b9f-029a-4d28-b714-da8fd2fa861e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.378 [INFO][4037] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.378 [INFO][4037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" iface="eth0" netns="/var/run/netns/cni-ce509b9c-2749-330c-fb05-765a7e042d08" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.379 [INFO][4037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" iface="eth0" netns="/var/run/netns/cni-ce509b9c-2749-330c-fb05-765a7e042d08" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.386 [INFO][4037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" iface="eth0" netns="/var/run/netns/cni-ce509b9c-2749-330c-fb05-765a7e042d08" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.386 [INFO][4037] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.386 [INFO][4037] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.445 [INFO][4134] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.446 [INFO][4134] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.463 [INFO][4134] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.494 [WARNING][4134] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.495 [INFO][4134] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.499 [INFO][4134] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.522805 containerd[1586]: 2026-03-02 12:56:35.514 [INFO][4037] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:35.522805 containerd[1586]: time="2026-03-02T12:56:35.522681344Z" level=info msg="TearDown network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\" successfully" Mar 2 12:56:35.522805 containerd[1586]: time="2026-03-02T12:56:35.522764539Z" level=info msg="StopPodSandbox for \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\" returns successfully" Mar 2 12:56:35.540117 kubelet[2761]: E0302 12:56:35.524788 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:35.540117 kubelet[2761]: I0302 12:56:35.532584 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4839b9f-029a-4d28-b714-da8fd2fa861e-kube-api-access-jldjj" (OuterVolumeSpecName: "kube-api-access-jldjj") pod "f4839b9f-029a-4d28-b714-da8fd2fa861e" (UID: "f4839b9f-029a-4d28-b714-da8fd2fa861e"). InnerVolumeSpecName "kube-api-access-jldjj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 12:56:35.526923 systemd[1]: var-lib-kubelet-pods-f4839b9f\x2d029a\x2d4d28\x2db714\x2dda8fd2fa861e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 2 12:56:35.541565 containerd[1586]: time="2026-03-02T12:56:35.526759418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cv6q7,Uid:73b727e3-efa9-44aa-a2c1-a5653c8e04db,Namespace:kube-system,Attempt:1,}" Mar 2 12:56:35.542804 systemd[1]: run-netns-cni\x2dce509b9c\x2d2749\x2d330c\x2dfb05\x2d765a7e042d08.mount: Deactivated successfully. Mar 2 12:56:35.544200 systemd[1]: var-lib-kubelet-pods-f4839b9f\x2d029a\x2d4d28\x2db714\x2dda8fd2fa861e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djldjj.mount: Deactivated successfully. Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.213 [INFO][3978] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.215 [INFO][3978] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" iface="eth0" netns="/var/run/netns/cni-1e7e6512-4dc1-3192-e342-6c6fada840ba" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.217 [INFO][3978] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" iface="eth0" netns="/var/run/netns/cni-1e7e6512-4dc1-3192-e342-6c6fada840ba" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.229 [INFO][3978] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" iface="eth0" netns="/var/run/netns/cni-1e7e6512-4dc1-3192-e342-6c6fada840ba" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.229 [INFO][3978] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.229 [INFO][3978] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.475 [INFO][4091] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.475 [INFO][4091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.499 [INFO][4091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.532 [WARNING][4091] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.532 [INFO][4091] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.538 [INFO][4091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.553289 containerd[1586]: 2026-03-02 12:56:35.545 [INFO][3978] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:35.556539 containerd[1586]: time="2026-03-02T12:56:35.556504590Z" level=info msg="TearDown network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\" successfully" Mar 2 12:56:35.556620 containerd[1586]: time="2026-03-02T12:56:35.556605448Z" level=info msg="StopPodSandbox for \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\" returns successfully" Mar 2 12:56:35.559629 containerd[1586]: time="2026-03-02T12:56:35.559557176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-7z5vf,Uid:5a848f00-f6ec-4385-a50f-239a27273d12,Namespace:calico-system,Attempt:1,}" Mar 2 12:56:35.577252 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.345 [INFO][4055] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.345 [INFO][4055] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" iface="eth0" netns="/var/run/netns/cni-b06ca1c0-3160-9490-c159-361f53558280" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.346 [INFO][4055] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" iface="eth0" netns="/var/run/netns/cni-b06ca1c0-3160-9490-c159-361f53558280" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.353 [INFO][4055] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" iface="eth0" netns="/var/run/netns/cni-b06ca1c0-3160-9490-c159-361f53558280" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.353 [INFO][4055] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.354 [INFO][4055] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.551 [INFO][4118] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.554 [INFO][4118] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.555 [INFO][4118] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.573 [WARNING][4118] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.573 [INFO][4118] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.578 [INFO][4118] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.594320 containerd[1586]: 2026-03-02 12:56:35.583 [INFO][4055] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:35.601165 containerd[1586]: time="2026-03-02T12:56:35.598540425Z" level=info msg="TearDown network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\" successfully" Mar 2 12:56:35.601165 containerd[1586]: time="2026-03-02T12:56:35.601142073Z" level=info msg="StopPodSandbox for \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\" returns successfully" Mar 2 12:56:35.602212 kubelet[2761]: I0302 12:56:35.602179 2761 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 2 12:56:35.602282 kubelet[2761]: I0302 12:56:35.602215 2761 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4839b9f-029a-4d28-b714-da8fd2fa861e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 2 12:56:35.602282 kubelet[2761]: I0302 12:56:35.602229 2761 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f4839b9f-029a-4d28-b714-da8fd2fa861e-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 2 12:56:35.602282 kubelet[2761]: I0302 12:56:35.602243 2761 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f4839b9f-029a-4d28-b714-da8fd2fa861e-kube-api-access-jldjj\") on node \"localhost\" DevicePath \"\"" Mar 2 12:56:35.603641 containerd[1586]: time="2026-03-02T12:56:35.603580746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-kzgtl,Uid:3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e,Namespace:calico-system,Attempt:1,}" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.402 [INFO][3981] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.402 [INFO][3981] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" iface="eth0" netns="/var/run/netns/cni-b2fea1f0-20a1-2e4c-13f4-e4229e30678a" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.403 [INFO][3981] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" iface="eth0" netns="/var/run/netns/cni-b2fea1f0-20a1-2e4c-13f4-e4229e30678a" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.404 [INFO][3981] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" iface="eth0" netns="/var/run/netns/cni-b2fea1f0-20a1-2e4c-13f4-e4229e30678a" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.404 [INFO][3981] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.404 [INFO][3981] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.569 [INFO][4142] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.574 [INFO][4142] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.578 [INFO][4142] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.591 [WARNING][4142] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.592 [INFO][4142] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.597 [INFO][4142] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.650502 containerd[1586]: 2026-03-02 12:56:35.608 [INFO][3981] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:35.653672 containerd[1586]: time="2026-03-02T12:56:35.653023245Z" level=info msg="TearDown network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\" successfully" Mar 2 12:56:35.654096 containerd[1586]: time="2026-03-02T12:56:35.653680722Z" level=info msg="StopPodSandbox for \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\" returns successfully" Mar 2 12:56:35.656779 containerd[1586]: time="2026-03-02T12:56:35.656542237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-rdrh7,Uid:c27ea198-031e-421a-9756-e262b0869b53,Namespace:calico-system,Attempt:1,}" Mar 2 12:56:35.659986 containerd[1586]: time="2026-03-02T12:56:35.659631595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lczbs,Uid:7e862cd4-019e-430f-a2e4-79712cc8a730,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee\"" Mar 2 12:56:35.672327 containerd[1586]: time="2026-03-02T12:56:35.671907975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.491 [INFO][4029] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.491 [INFO][4029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" iface="eth0" netns="/var/run/netns/cni-d19c4c8d-873a-3023-2d1b-b7560d42a89b" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.493 [INFO][4029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" iface="eth0" netns="/var/run/netns/cni-d19c4c8d-873a-3023-2d1b-b7560d42a89b" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.494 [INFO][4029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" iface="eth0" netns="/var/run/netns/cni-d19c4c8d-873a-3023-2d1b-b7560d42a89b" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.503 [INFO][4029] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.503 [INFO][4029] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.646 [INFO][4171] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.646 [INFO][4171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.646 [INFO][4171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.667 [WARNING][4171] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.667 [INFO][4171] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.684 [INFO][4171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:35.697088 containerd[1586]: 2026-03-02 12:56:35.690 [INFO][4029] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:35.698288 containerd[1586]: time="2026-03-02T12:56:35.697609537Z" level=info msg="TearDown network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\" successfully" Mar 2 12:56:35.698288 containerd[1586]: time="2026-03-02T12:56:35.697644322Z" level=info msg="StopPodSandbox for \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\" returns successfully" Mar 2 12:56:35.700147 containerd[1586]: time="2026-03-02T12:56:35.700061733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5c78c78-476fj,Uid:20530e0e-7523-46f2-bf7b-30bc40bef15b,Namespace:calico-system,Attempt:1,}" Mar 2 12:56:36.325930 kubelet[2761]: I0302 12:56:36.325577 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e0a310d-a606-49d4-8ff8-4753a9ebab22-whisker-ca-bundle\") pod \"whisker-7c55479bf8-7blqb\" (UID: \"2e0a310d-a606-49d4-8ff8-4753a9ebab22\") " pod="calico-system/whisker-7c55479bf8-7blqb" Mar 2 12:56:36.325930 kubelet[2761]: I0302 12:56:36.325643 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e0a310d-a606-49d4-8ff8-4753a9ebab22-nginx-config\") pod \"whisker-7c55479bf8-7blqb\" (UID: \"2e0a310d-a606-49d4-8ff8-4753a9ebab22\") " pod="calico-system/whisker-7c55479bf8-7blqb" Mar 2 12:56:36.325930 kubelet[2761]: I0302 12:56:36.325666 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e0a310d-a606-49d4-8ff8-4753a9ebab22-whisker-backend-key-pair\") pod \"whisker-7c55479bf8-7blqb\" (UID: \"2e0a310d-a606-49d4-8ff8-4753a9ebab22\") " pod="calico-system/whisker-7c55479bf8-7blqb" Mar 2 12:56:36.325930 kubelet[2761]: I0302 12:56:36.325681 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2b2h\" (UniqueName: \"kubernetes.io/projected/2e0a310d-a606-49d4-8ff8-4753a9ebab22-kube-api-access-v2b2h\") pod \"whisker-7c55479bf8-7blqb\" (UID: \"2e0a310d-a606-49d4-8ff8-4753a9ebab22\") " pod="calico-system/whisker-7c55479bf8-7blqb" Mar 2 12:56:36.365705 systemd-networkd[1244]: cali2b7bdfc1d63: Link UP Mar 2 12:56:36.368609 systemd-networkd[1244]: cali2b7bdfc1d63: Gained carrier Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:35.638 [ERROR][4194] cni-plugin/utils.go 116: File does not exist, 
skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:35.671 [INFO][4194] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0 coredns-674b8bbfcf- kube-system 73b727e3-efa9-44aa-a2c1-a5653c8e04db 920 0 2026-03-02 12:56:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cv6q7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b7bdfc1d63 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:35.671 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:35.930 [INFO][4235] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" HandleID="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.055 [INFO][4235] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" HandleID="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001f6e20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cv6q7", "timestamp":"2026-03-02 12:56:35.930123226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002162c0)} Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.055 [INFO][4235] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.055 [INFO][4235] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.055 [INFO][4235] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.062 [INFO][4235] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.081 [INFO][4235] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.123 [INFO][4235] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.130 [INFO][4235] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.138 [INFO][4235] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.138 [INFO][4235] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.143 [INFO][4235] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49 Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.219 [INFO][4235] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.231 [INFO][4235] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.231 [INFO][4235] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" host="localhost" Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.231 [INFO][4235] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:36.416809 containerd[1586]: 2026-03-02 12:56:36.231 [INFO][4235] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" HandleID="k8s-pod-network.a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.419640 containerd[1586]: 2026-03-02 12:56:36.347 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73b727e3-efa9-44aa-a2c1-a5653c8e04db", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cv6q7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b7bdfc1d63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:36.419640 containerd[1586]: 2026-03-02 12:56:36.347 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.419640 containerd[1586]: 2026-03-02 12:56:36.347 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b7bdfc1d63 ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.419640 containerd[1586]: 2026-03-02 12:56:36.375 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.419640 
containerd[1586]: 2026-03-02 12:56:36.376 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73b727e3-efa9-44aa-a2c1-a5653c8e04db", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49", Pod:"coredns-674b8bbfcf-cv6q7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b7bdfc1d63", MAC:"4a:e7:40:a3:cd:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:36.419640 containerd[1586]: 2026-03-02 12:56:36.403 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49" Namespace="kube-system" Pod="coredns-674b8bbfcf-cv6q7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:36.464698 systemd-networkd[1244]: cali8475350a559: Gained IPv6LL Mar 2 12:56:36.474062 systemd[1]: run-netns-cni\x2d1e7e6512\x2d4dc1\x2d3192\x2de342\x2d6c6fada840ba.mount: Deactivated successfully. Mar 2 12:56:36.474349 systemd[1]: run-netns-cni\x2db2fea1f0\x2d20a1\x2d2e4c\x2d13f4\x2de4229e30678a.mount: Deactivated successfully. Mar 2 12:56:36.474615 systemd[1]: run-netns-cni\x2db06ca1c0\x2d3160\x2d9490\x2dc159\x2d361f53558280.mount: Deactivated successfully. Mar 2 12:56:36.474786 systemd[1]: run-netns-cni\x2dd19c4c8d\x2d873a\x2d3023\x2d2d1b\x2db7560d42a89b.mount: Deactivated successfully. 
Mar 2 12:56:36.615322 containerd[1586]: time="2026-03-02T12:56:36.612367593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c55479bf8-7blqb,Uid:2e0a310d-a606-49d4-8ff8-4753a9ebab22,Namespace:calico-system,Attempt:0,}" Mar 2 12:56:36.762531 kubelet[2761]: I0302 12:56:36.761952 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4839b9f-029a-4d28-b714-da8fd2fa861e" path="/var/lib/kubelet/pods/f4839b9f-029a-4d28-b714-da8fd2fa861e/volumes" Mar 2 12:56:36.837806 systemd-networkd[1244]: cali31a914c9b81: Link UP Mar 2 12:56:36.839509 systemd-networkd[1244]: cali31a914c9b81: Gained carrier Mar 2 12:56:36.840951 containerd[1586]: time="2026-03-02T12:56:36.834597698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:36.840951 containerd[1586]: time="2026-03-02T12:56:36.835120524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:36.840951 containerd[1586]: time="2026-03-02T12:56:36.835138748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:36.840951 containerd[1586]: time="2026-03-02T12:56:36.835603696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:35.703 [ERROR][4188] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:35.802 [INFO][4188] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0 coredns-674b8bbfcf- kube-system 57fb18fa-6d09-4965-b41b-6c5cac95f136 913 0 2026-03-02 12:56:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-sxcg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali31a914c9b81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:35.804 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.493 [INFO][4305] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" HandleID="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.574 [INFO][4305] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" 
HandleID="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000782440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-sxcg5", "timestamp":"2026-03-02 12:56:36.493639905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000251760)} Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.574 [INFO][4305] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.574 [INFO][4305] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.574 [INFO][4305] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.582 [INFO][4305] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.596 [INFO][4305] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.613 [INFO][4305] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.619 [INFO][4305] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.626 [INFO][4305] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.628 [INFO][4305] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.656 [INFO][4305] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53 Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.686 [INFO][4305] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.701 [INFO][4305] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.701 [INFO][4305] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" host="localhost" Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.701 [INFO][4305] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:36.971453 containerd[1586]: 2026-03-02 12:56:36.701 [INFO][4305] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" HandleID="k8s-pod-network.ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.974179 containerd[1586]: 2026-03-02 12:56:36.737 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"57fb18fa-6d09-4965-b41b-6c5cac95f136", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-sxcg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a914c9b81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:36.974179 containerd[1586]: 2026-03-02 12:56:36.749 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.974179 containerd[1586]: 2026-03-02 12:56:36.758 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31a914c9b81 ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.974179 containerd[1586]: 2026-03-02 12:56:36.840 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.974179 
containerd[1586]: 2026-03-02 12:56:36.891 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"57fb18fa-6d09-4965-b41b-6c5cac95f136", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53", Pod:"coredns-674b8bbfcf-sxcg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a914c9b81", MAC:"ae:f7:17:b2:d4:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:36.974179 containerd[1586]: 2026-03-02 12:56:36.957 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxcg5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:36.994730 systemd-networkd[1244]: cali37e29ad0ada: Link UP Mar 2 12:56:36.995274 systemd-networkd[1244]: cali37e29ad0ada: Gained carrier Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.260 [ERROR][4267] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.353 [INFO][4267] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0 calico-kube-controllers-6c5c78c78- calico-system 20530e0e-7523-46f2-bf7b-30bc40bef15b 923 0 2026-03-02 12:56:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c5c78c78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] 
[]} {k8s localhost calico-kube-controllers-6c5c78c78-476fj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37e29ad0ada [] [] }} ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.353 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.562 [INFO][4432] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" HandleID="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.582 [INFO][4432] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" HandleID="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c5c78c78-476fj", "timestamp":"2026-03-02 12:56:36.562330733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00069c2c0)} Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.582 [INFO][4432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.702 [INFO][4432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.702 [INFO][4432] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.720 [INFO][4432] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.771 [INFO][4432] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.796 [INFO][4432] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.816 [INFO][4432] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.826 [INFO][4432] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.827 [INFO][4432] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.832 [INFO][4432] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799 Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.841 [INFO][4432] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.902 [INFO][4432] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.902 [INFO][4432] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" host="localhost" Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.903 [INFO][4432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:37.050943 containerd[1586]: 2026-03-02 12:56:36.903 [INFO][4432] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" HandleID="k8s-pod-network.162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.053957 containerd[1586]: 2026-03-02 12:56:36.968 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0", GenerateName:"calico-kube-controllers-6c5c78c78-", Namespace:"calico-system", SelfLink:"", UID:"20530e0e-7523-46f2-bf7b-30bc40bef15b", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5c78c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c5c78c78-476fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37e29ad0ada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.053957 containerd[1586]: 2026-03-02 12:56:36.976 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.053957 containerd[1586]: 2026-03-02 12:56:36.976 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37e29ad0ada ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.053957 containerd[1586]: 2026-03-02 12:56:37.001 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.053957 containerd[1586]: 2026-03-02 12:56:37.004 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0", GenerateName:"calico-kube-controllers-6c5c78c78-", Namespace:"calico-system", SelfLink:"", UID:"20530e0e-7523-46f2-bf7b-30bc40bef15b", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5c78c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799", Pod:"calico-kube-controllers-6c5c78c78-476fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37e29ad0ada", MAC:"b6:39:b0:4e:b0:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.053957 containerd[1586]: 2026-03-02 12:56:37.029 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799" Namespace="calico-system" Pod="calico-kube-controllers-6c5c78c78-476fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:37.153714 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:37.170976 systemd-networkd[1244]: cali4a9004f7a01: Link UP Mar 2 12:56:37.181352 systemd-networkd[1244]: cali4a9004f7a01: Gained carrier Mar 2 12:56:37.223905 containerd[1586]: time="2026-03-02T12:56:37.217914914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.223905 containerd[1586]: time="2026-03-02T12:56:37.217988241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.223905 containerd[1586]: time="2026-03-02T12:56:37.218008409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.223905 containerd[1586]: time="2026-03-02T12:56:37.218122442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:35.914 [ERROR][4243] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.012 [INFO][4243] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0 calico-apiserver-677b948c89- calico-system 3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e 919 0 2026-03-02 12:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677b948c89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-677b948c89-kzgtl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4a9004f7a01 [] [] }} ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.012 [INFO][4243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.657 [INFO][4381] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" HandleID="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.759 [INFO][4381] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" HandleID="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001399c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-677b948c89-kzgtl", "timestamp":"2026-03-02 12:56:36.657017616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057e000)} Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.759 [INFO][4381] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.919 [INFO][4381] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.962 [INFO][4381] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:36.983 [INFO][4381] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.029 [INFO][4381] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.038 [INFO][4381] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.042 [INFO][4381] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.047 [INFO][4381] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.048 [INFO][4381] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.050 [INFO][4381] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9 Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.060 [INFO][4381] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.121 [INFO][4381] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.124 [INFO][4381] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" host="localhost" Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.124 [INFO][4381] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:37.235008 containerd[1586]: 2026-03-02 12:56:37.126 [INFO][4381] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" HandleID="k8s-pod-network.5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.236137 containerd[1586]: 2026-03-02 12:56:37.146 [INFO][4243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-677b948c89-kzgtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a9004f7a01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.236137 containerd[1586]: 2026-03-02 12:56:37.151 [INFO][4243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.236137 containerd[1586]: 2026-03-02 12:56:37.151 [INFO][4243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a9004f7a01 ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.236137 containerd[1586]: 2026-03-02 12:56:37.190 [INFO][4243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.236137 containerd[1586]: 2026-03-02 12:56:37.198 [INFO][4243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9", Pod:"calico-apiserver-677b948c89-kzgtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a9004f7a01", MAC:"12:27:ad:1c:9c:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.236137 containerd[1586]: 2026-03-02 12:56:37.220 [INFO][4243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9" Namespace="calico-system" Pod="calico-apiserver-677b948c89-kzgtl" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:37.318469 kernel: calico-node[4372]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 12:56:37.355721 systemd-networkd[1244]: cali5933164bc83: Link UP Mar 2 12:56:37.364606 systemd-networkd[1244]: cali5933164bc83: Gained carrier Mar 2 12:56:37.398286 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:37.400635 containerd[1586]: time="2026-03-02T12:56:37.400002197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cv6q7,Uid:73b727e3-efa9-44aa-a2c1-a5653c8e04db,Namespace:kube-system,Attempt:1,} returns sandbox id \"a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49\"" Mar 2 12:56:37.406791 kubelet[2761]: E0302 12:56:37.405189 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.517060 containerd[1586]: time="2026-03-02T12:56:37.511503745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.517060 containerd[1586]: time="2026-03-02T12:56:37.513243935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.517060 containerd[1586]: time="2026-03-02T12:56:37.513298577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.522745 containerd[1586]: time="2026-03-02T12:56:37.518973082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:35.887 [ERROR][4223] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:35.942 [INFO][4223] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0 calico-apiserver-677b948c89- calico-system 5a848f00-f6ec-4385-a50f-239a27273d12 915 0 2026-03-02 12:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677b948c89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-677b948c89-7z5vf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5933164bc83 [] [] }} ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:35.950 [INFO][4223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:36.693 [INFO][4395] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" HandleID="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:36.763 [INFO][4395] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" HandleID="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b9040), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-677b948c89-7z5vf", "timestamp":"2026-03-02 12:56:36.69361774 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004e2000)} Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:36.770 [INFO][4395] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.124 [INFO][4395] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.125 [INFO][4395] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.146 [INFO][4395] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.159 [INFO][4395] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.190 [INFO][4395] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.202 [INFO][4395] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.211 [INFO][4395] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.216 [INFO][4395] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.233 [INFO][4395] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13 Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.242 [INFO][4395] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.254 [INFO][4395] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.254 [INFO][4395] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" host="localhost" Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.254 [INFO][4395] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:37.526491 containerd[1586]: 2026-03-02 12:56:37.255 [INFO][4395] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" HandleID="k8s-pod-network.46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.527233 containerd[1586]: 2026-03-02 12:56:37.289 [INFO][4223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"5a848f00-f6ec-4385-a50f-239a27273d12", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-677b948c89-7z5vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5933164bc83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.527233 containerd[1586]: 2026-03-02 12:56:37.293 [INFO][4223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.527233 containerd[1586]: 2026-03-02 12:56:37.294 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5933164bc83 ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.527233 containerd[1586]: 2026-03-02 12:56:37.374 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.527233 containerd[1586]: 2026-03-02 12:56:37.385 [INFO][4223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"5a848f00-f6ec-4385-a50f-239a27273d12", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13", Pod:"calico-apiserver-677b948c89-7z5vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5933164bc83", MAC:"b2:d3:b0:71:41:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.527233 containerd[1586]: 2026-03-02 12:56:37.489 [INFO][4223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13" Namespace="calico-system" Pod="calico-apiserver-677b948c89-7z5vf" WorkloadEndpoint="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:37.530520 containerd[1586]: time="2026-03-02T12:56:37.530488313Z" level=info msg="CreateContainer within sandbox \"a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:56:37.632272 containerd[1586]: time="2026-03-02T12:56:37.619043978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.632272 containerd[1586]: time="2026-03-02T12:56:37.619167879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.632272 containerd[1586]: time="2026-03-02T12:56:37.619187977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.632272 containerd[1586]: time="2026-03-02T12:56:37.619636124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.710706 systemd-networkd[1244]: cali7e35b2e2e78: Link UP Mar 2 12:56:37.721723 systemd-networkd[1244]: cali7e35b2e2e78: Gained carrier Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:36.052 [ERROR][4248] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:36.123 [INFO][4248] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9566f57b5--rdrh7-eth0 goldmane-9566f57b5- calico-system c27ea198-031e-421a-9756-e262b0869b53 921 0 2026-03-02 12:56:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9566f57b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9566f57b5-rdrh7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7e35b2e2e78 [] [] }} ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:36.123 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:36.750 [INFO][4406] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" HandleID="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:36.832 [INFO][4406] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" HandleID="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000350430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9566f57b5-rdrh7", "timestamp":"2026-03-02 12:56:36.750752817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000202160)} Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:36.832 [INFO][4406] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.257 [INFO][4406] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.282 [INFO][4406] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.304 [INFO][4406] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.327 [INFO][4406] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.337 [INFO][4406] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.341 [INFO][4406] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.379 [INFO][4406] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.379 [INFO][4406] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.384 [INFO][4406] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.404 [INFO][4406] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.486 [INFO][4406] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.487 [INFO][4406] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" host="localhost" Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.488 [INFO][4406] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:37.769680 containerd[1586]: 2026-03-02 12:56:37.488 [INFO][4406] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" HandleID="k8s-pod-network.84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.770700 containerd[1586]: 2026-03-02 12:56:37.534 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--rdrh7-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"c27ea198-031e-421a-9756-e262b0869b53", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9566f57b5-rdrh7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e35b2e2e78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.770700 containerd[1586]: 2026-03-02 12:56:37.534 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.770700 containerd[1586]: 2026-03-02 12:56:37.534 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e35b2e2e78 ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.770700 containerd[1586]: 2026-03-02 12:56:37.723 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.770700 containerd[1586]: 2026-03-02 12:56:37.723 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--rdrh7-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"c27ea198-031e-421a-9756-e262b0869b53", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a", Pod:"goldmane-9566f57b5-rdrh7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e35b2e2e78", MAC:"fe:e0:02:86:4e:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:37.770700 containerd[1586]: 2026-03-02 12:56:37.757 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a" Namespace="calico-system" Pod="goldmane-9566f57b5-rdrh7" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:37.794524 containerd[1586]: time="2026-03-02T12:56:37.794235079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxcg5,Uid:57fb18fa-6d09-4965-b41b-6c5cac95f136,Namespace:kube-system,Attempt:1,} returns sandbox id \"ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53\"" Mar 2 12:56:37.797787 kubelet[2761]: E0302 12:56:37.797687 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.846365 containerd[1586]: time="2026-03-02T12:56:37.846247997Z" level=info msg="CreateContainer within sandbox \"ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:56:37.856912 containerd[1586]: time="2026-03-02T12:56:37.855178739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.856912 containerd[1586]: time="2026-03-02T12:56:37.855252787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.856912 containerd[1586]: time="2026-03-02T12:56:37.855268317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.856912 containerd[1586]: time="2026-03-02T12:56:37.856533690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.936237 systemd-networkd[1244]: calid7d1b079c84: Link UP Mar 2 12:56:37.936897 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:37.937813 systemd-networkd[1244]: calid7d1b079c84: Gained carrier Mar 2 12:56:37.948475 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:37.986328 containerd[1586]: time="2026-03-02T12:56:37.985508087Z" level=info msg="CreateContainer within sandbox \"a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"044d0d43ecefa4365b480eabee353051279260cdcad14ba248cc50e46f9b0eae\"" Mar 2 12:56:37.990068 containerd[1586]: time="2026-03-02T12:56:37.989251872Z" level=info msg="StartContainer for \"044d0d43ecefa4365b480eabee353051279260cdcad14ba248cc50e46f9b0eae\"" Mar 2 12:56:37.994129 containerd[1586]: time="2026-03-02T12:56:37.994096729Z" level=info msg="CreateContainer within sandbox \"ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c2cc289c84de12f23f9cbf67f2325f8274e8bbc2f2dab9ed1a965adf3a2f21a\"" Mar 2 12:56:37.997831 systemd-networkd[1244]: cali2b7bdfc1d63: Gained IPv6LL Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.161 [INFO][4511] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7c55479bf8--7blqb-eth0 whisker-7c55479bf8- calico-system 2e0a310d-a606-49d4-8ff8-4753a9ebab22 942 0 2026-03-02 12:56:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c55479bf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7c55479bf8-7blqb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid7d1b079c84 [] [] }} ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.161 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.636 [INFO][4588] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" HandleID="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Workload="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.670 [INFO][4588] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" HandleID="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Workload="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003826c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7c55479bf8-7blqb", 
"timestamp":"2026-03-02 12:56:37.636719031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000205600)} Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.697 [INFO][4588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.697 [INFO][4588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.697 [INFO][4588] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.716 [INFO][4588] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.765 [INFO][4588] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.780 [INFO][4588] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.785 [INFO][4588] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.795 [INFO][4588] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.796 [INFO][4588] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.804 [INFO][4588] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.869 [INFO][4588] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.883 [INFO][4588] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.883 [INFO][4588] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" host="localhost" Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.883 [INFO][4588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:56:37.999585 containerd[1586]: 2026-03-02 12:56:37.883 [INFO][4588] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" HandleID="k8s-pod-network.b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Workload="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:38.000347 containerd[1586]: 2026-03-02 12:56:37.894 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c55479bf8--7blqb-eth0", GenerateName:"whisker-7c55479bf8-", Namespace:"calico-system", SelfLink:"", UID:"2e0a310d-a606-49d4-8ff8-4753a9ebab22", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c55479bf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7c55479bf8-7blqb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7d1b079c84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:38.000347 containerd[1586]: 2026-03-02 12:56:37.894 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:38.000347 containerd[1586]: 2026-03-02 12:56:37.894 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7d1b079c84 ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:38.000347 containerd[1586]: 2026-03-02 12:56:37.939 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:38.000347 containerd[1586]: 2026-03-02 12:56:37.952 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c55479bf8--7blqb-eth0", GenerateName:"whisker-7c55479bf8-", Namespace:"calico-system", SelfLink:"", UID:"2e0a310d-a606-49d4-8ff8-4753a9ebab22", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c55479bf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b", Pod:"whisker-7c55479bf8-7blqb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7d1b079c84", MAC:"26:e1:a2:75:08:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:38.000347 containerd[1586]: 2026-03-02 12:56:37.982 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b" Namespace="calico-system" Pod="whisker-7c55479bf8-7blqb" WorkloadEndpoint="localhost-k8s-whisker--7c55479bf8--7blqb-eth0" Mar 2 12:56:38.002918 containerd[1586]: time="2026-03-02T12:56:38.001996599Z" level=info msg="StartContainer for \"0c2cc289c84de12f23f9cbf67f2325f8274e8bbc2f2dab9ed1a965adf3a2f21a\"" Mar 2 12:56:38.102088 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:38.120733 containerd[1586]: time="2026-03-02T12:56:38.120597145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5c78c78-476fj,Uid:20530e0e-7523-46f2-bf7b-30bc40bef15b,Namespace:calico-system,Attempt:1,} returns sandbox id \"162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799\"" Mar 2 12:56:38.125186 containerd[1586]: time="2026-03-02T12:56:38.094790133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:38.125186 containerd[1586]: time="2026-03-02T12:56:38.094934383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:38.125186 containerd[1586]: time="2026-03-02T12:56:38.094960642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:38.125186 containerd[1586]: time="2026-03-02T12:56:38.095113468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:38.131655 containerd[1586]: time="2026-03-02T12:56:38.131584831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-kzgtl,Uid:3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9\"" Mar 2 12:56:38.190771 containerd[1586]: time="2026-03-02T12:56:38.178896684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:38.190771 containerd[1586]: time="2026-03-02T12:56:38.179011148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:38.190771 containerd[1586]: time="2026-03-02T12:56:38.179034571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:38.190771 containerd[1586]: time="2026-03-02T12:56:38.179191725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:38.272446 containerd[1586]: time="2026-03-02T12:56:38.272078471Z" level=info msg="StartContainer for \"044d0d43ecefa4365b480eabee353051279260cdcad14ba248cc50e46f9b0eae\" returns successfully" Mar 2 12:56:38.287933 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:38.297579 containerd[1586]: time="2026-03-02T12:56:38.297476073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677b948c89-7z5vf,Uid:5a848f00-f6ec-4385-a50f-239a27273d12,Namespace:calico-system,Attempt:1,} returns sandbox id \"46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13\"" Mar 2 12:56:38.302977 containerd[1586]: time="2026-03-02T12:56:38.302827588Z" level=info msg="StartContainer for \"0c2cc289c84de12f23f9cbf67f2325f8274e8bbc2f2dab9ed1a965adf3a2f21a\" returns successfully" Mar 2 12:56:38.315924 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:56:38.398142 containerd[1586]: time="2026-03-02T12:56:38.397698303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-rdrh7,Uid:c27ea198-031e-421a-9756-e262b0869b53,Namespace:calico-system,Attempt:1,} returns sandbox id \"84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a\"" Mar 2 12:56:38.419342 containerd[1586]: time="2026-03-02T12:56:38.419275108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c55479bf8-7blqb,Uid:2e0a310d-a606-49d4-8ff8-4753a9ebab22,Namespace:calico-system,Attempt:0,} returns sandbox id \"b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b\"" Mar 2 12:56:38.457090 systemd[1]: run-containerd-runc-k8s.io-162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799-runc.VOR0zH.mount: Deactivated successfully. 
Mar 2 12:56:38.571219 containerd[1586]: time="2026-03-02T12:56:38.567379010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:38.572673 containerd[1586]: time="2026-03-02T12:56:38.572066008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 12:56:38.572673 containerd[1586]: time="2026-03-02T12:56:38.572150175Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:38.583809 containerd[1586]: time="2026-03-02T12:56:38.582656669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 2.910660801s" Mar 2 12:56:38.583809 containerd[1586]: time="2026-03-02T12:56:38.582710389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 12:56:38.585527 containerd[1586]: time="2026-03-02T12:56:38.584151149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:38.590215 containerd[1586]: time="2026-03-02T12:56:38.590011057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 12:56:38.599158 containerd[1586]: time="2026-03-02T12:56:38.599030545Z" level=info msg="CreateContainer within sandbox \"0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 12:56:38.639259 containerd[1586]: time="2026-03-02T12:56:38.639054736Z" level=info msg="CreateContainer within sandbox \"0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"44ff3f1371ae47572fde89d6fb8916580e6797f4a11038c638db134b4fc094b8\"" Mar 2 12:56:38.641369 containerd[1586]: time="2026-03-02T12:56:38.641286873Z" level=info msg="StartContainer for \"44ff3f1371ae47572fde89d6fb8916580e6797f4a11038c638db134b4fc094b8\"" Mar 2 12:56:38.771927 systemd-networkd[1244]: cali5933164bc83: Gained IPv6LL Mar 2 12:56:38.772241 systemd-networkd[1244]: vxlan.calico: Link UP Mar 2 12:56:38.772249 systemd-networkd[1244]: vxlan.calico: Gained carrier Mar 2 12:56:38.829631 systemd-networkd[1244]: cali31a914c9b81: Gained IPv6LL Mar 2 12:56:38.837097 containerd[1586]: time="2026-03-02T12:56:38.836785217Z" level=info msg="StartContainer for \"44ff3f1371ae47572fde89d6fb8916580e6797f4a11038c638db134b4fc094b8\" returns successfully" Mar 2 12:56:38.897955 systemd-journald[1170]: Under memory pressure, flushing caches. Mar 2 12:56:38.892649 systemd-resolved[1463]: Under memory pressure, flushing caches. Mar 2 12:56:38.892729 systemd-resolved[1463]: Flushed all caches. 
Mar 2 12:56:38.894302 systemd-networkd[1244]: cali37e29ad0ada: Gained IPv6LL Mar 2 12:56:39.020998 systemd-networkd[1244]: calid7d1b079c84: Gained IPv6LL Mar 2 12:56:39.084817 systemd-networkd[1244]: cali4a9004f7a01: Gained IPv6LL Mar 2 12:56:39.232702 kubelet[2761]: E0302 12:56:39.232511 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:39.263199 kubelet[2761]: I0302 12:56:39.260103 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sxcg5" podStartSLOduration=37.260079088 podStartE2EDuration="37.260079088s" podCreationTimestamp="2026-03-02 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:39.256093062 +0000 UTC m=+42.681809093" watchObservedRunningTime="2026-03-02 12:56:39.260079088 +0000 UTC m=+42.685795118" Mar 2 12:56:39.278446 kubelet[2761]: E0302 12:56:39.278271 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:39.789223 systemd-networkd[1244]: cali7e35b2e2e78: Gained IPv6LL Mar 2 12:56:40.280158 kubelet[2761]: E0302 12:56:40.279991 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:40.280158 kubelet[2761]: E0302 12:56:40.280006 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:40.622223 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Mar 2 12:56:40.911260 containerd[1586]: time="2026-03-02T12:56:40.911018834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:40.912464 containerd[1586]: time="2026-03-02T12:56:40.912368471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 12:56:40.914310 containerd[1586]: time="2026-03-02T12:56:40.914229774Z" level=info msg="ImageCreate event name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:40.918627 containerd[1586]: time="2026-03-02T12:56:40.918560725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:40.920315 containerd[1586]: time="2026-03-02T12:56:40.920247462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 2.329922119s" Mar 2 12:56:40.920482 containerd[1586]: time="2026-03-02T12:56:40.920319045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference 
\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 12:56:40.922355 containerd[1586]: time="2026-03-02T12:56:40.922098383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:56:40.940001 containerd[1586]: time="2026-03-02T12:56:40.939936242Z" level=info msg="CreateContainer within sandbox \"162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 12:56:40.961242 containerd[1586]: time="2026-03-02T12:56:40.961152114Z" level=info msg="CreateContainer within sandbox \"162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"997780f38041d57e3430a3c086a92a9e14a0bdab24b0597864a31f2ec14fa435\"" Mar 2 12:56:40.962341 containerd[1586]: time="2026-03-02T12:56:40.962290711Z" level=info msg="StartContainer for \"997780f38041d57e3430a3c086a92a9e14a0bdab24b0597864a31f2ec14fa435\"" Mar 2 12:56:41.083568 containerd[1586]: time="2026-03-02T12:56:41.083518937Z" level=info msg="StartContainer for \"997780f38041d57e3430a3c086a92a9e14a0bdab24b0597864a31f2ec14fa435\" returns successfully" Mar 2 12:56:41.286964 kubelet[2761]: E0302 12:56:41.286583 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:41.290059 kubelet[2761]: E0302 12:56:41.288791 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:41.319085 kubelet[2761]: I0302 12:56:41.318989 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cv6q7" podStartSLOduration=39.318970421 podStartE2EDuration="39.318970421s" podCreationTimestamp="2026-03-02 12:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:39.326580399 +0000 UTC m=+42.752296428" watchObservedRunningTime="2026-03-02 12:56:41.318970421 +0000 UTC m=+44.744686451" Mar 2 12:56:41.403479 kubelet[2761]: I0302 12:56:41.401278 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c5c78c78-476fj" podStartSLOduration=19.612337217 podStartE2EDuration="22.401253526s" podCreationTimestamp="2026-03-02 12:56:19 +0000 UTC" firstStartedPulling="2026-03-02 12:56:38.132948242 +0000 UTC m=+41.558664272" lastFinishedPulling="2026-03-02 12:56:40.92186455 +0000 UTC m=+44.347580581" observedRunningTime="2026-03-02 12:56:41.320572997 +0000 UTC m=+44.746289037" watchObservedRunningTime="2026-03-02 12:56:41.401253526 +0000 UTC m=+44.826969556" Mar 2 12:56:43.252315 containerd[1586]: time="2026-03-02T12:56:43.251590471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:43.257884 containerd[1586]: time="2026-03-02T12:56:43.257743297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 12:56:43.260518 containerd[1586]: time="2026-03-02T12:56:43.260448847Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 2 12:56:43.266765 containerd[1586]: time="2026-03-02T12:56:43.266652359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:43.272268 containerd[1586]: time="2026-03-02T12:56:43.268164406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 2.346020981s" Mar 2 12:56:43.272268 containerd[1586]: time="2026-03-02T12:56:43.268246329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:56:43.280202 containerd[1586]: time="2026-03-02T12:56:43.278350698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:56:43.294502 containerd[1586]: time="2026-03-02T12:56:43.294345746Z" level=info msg="CreateContainer within sandbox \"5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:56:43.351125 containerd[1586]: time="2026-03-02T12:56:43.350549312Z" level=info msg="CreateContainer within sandbox \"5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d2a023603bce58e936c1add6fd94e1cbd9be2e426ba56cf78d2fc215fcadf96b\"" Mar 2 12:56:43.352089 containerd[1586]: time="2026-03-02T12:56:43.351962066Z" level=info msg="StartContainer for \"d2a023603bce58e936c1add6fd94e1cbd9be2e426ba56cf78d2fc215fcadf96b\"" Mar 2 12:56:43.447365 containerd[1586]: time="2026-03-02T12:56:43.446621947Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:43.452229 containerd[1586]: time="2026-03-02T12:56:43.452119046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 12:56:43.457898 containerd[1586]: time="2026-03-02T12:56:43.457645129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 179.151856ms" Mar 2 12:56:43.457898 containerd[1586]: time="2026-03-02T12:56:43.457699200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:56:43.461065 containerd[1586]: time="2026-03-02T12:56:43.461033599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 12:56:43.466913 containerd[1586]: time="2026-03-02T12:56:43.466767745Z" level=info msg="CreateContainer within sandbox \"46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:56:43.534713 containerd[1586]: time="2026-03-02T12:56:43.530940578Z" 
level=info msg="StartContainer for \"d2a023603bce58e936c1add6fd94e1cbd9be2e426ba56cf78d2fc215fcadf96b\" returns successfully" Mar 2 12:56:43.561982 containerd[1586]: time="2026-03-02T12:56:43.561818622Z" level=info msg="CreateContainer within sandbox \"46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22e6dc4b74980eda476924de695d48803cf09bc3b86b881f6ee8fdbc0d9c704e\"" Mar 2 12:56:43.564956 containerd[1586]: time="2026-03-02T12:56:43.564747685Z" level=info msg="StartContainer for \"22e6dc4b74980eda476924de695d48803cf09bc3b86b881f6ee8fdbc0d9c704e\"" Mar 2 12:56:43.756628 containerd[1586]: time="2026-03-02T12:56:43.756558663Z" level=info msg="StartContainer for \"22e6dc4b74980eda476924de695d48803cf09bc3b86b881f6ee8fdbc0d9c704e\" returns successfully" Mar 2 12:56:44.419885 kubelet[2761]: I0302 12:56:44.418153 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-677b948c89-kzgtl" podStartSLOduration=21.283812225 podStartE2EDuration="26.418126151s" podCreationTimestamp="2026-03-02 12:56:18 +0000 UTC" firstStartedPulling="2026-03-02 12:56:38.141588209 +0000 UTC m=+41.567304239" lastFinishedPulling="2026-03-02 12:56:43.275902125 +0000 UTC m=+46.701618165" observedRunningTime="2026-03-02 12:56:44.37100546 +0000 UTC m=+47.796721500" watchObservedRunningTime="2026-03-02 12:56:44.418126151 +0000 UTC m=+47.843842201" Mar 2 12:56:44.845882 systemd-resolved[1463]: Under memory pressure, flushing caches. Mar 2 12:56:44.849673 systemd-journald[1170]: Under memory pressure, flushing caches. Mar 2 12:56:44.845921 systemd-resolved[1463]: Flushed all caches. Mar 2 12:56:45.386474 kubelet[2761]: I0302 12:56:45.385530 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:56:45.386474 kubelet[2761]: I0302 12:56:45.385684 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:56:45.873121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930039311.mount: Deactivated successfully. 
Mar 2 12:56:46.389101 kubelet[2761]: I0302 12:56:46.389012 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:56:46.491468 kubelet[2761]: I0302 12:56:46.490141 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-677b948c89-7z5vf" podStartSLOduration=23.336334103 podStartE2EDuration="28.490120005s" podCreationTimestamp="2026-03-02 12:56:18 +0000 UTC" firstStartedPulling="2026-03-02 12:56:38.30598452 +0000 UTC m=+41.731700551" lastFinishedPulling="2026-03-02 12:56:43.459770424 +0000 UTC m=+46.885486453" observedRunningTime="2026-03-02 12:56:44.429103477 +0000 UTC m=+47.854819598" watchObservedRunningTime="2026-03-02 12:56:46.490120005 +0000 UTC m=+49.915836065" Mar 2 12:56:46.819075 containerd[1586]: time="2026-03-02T12:56:46.818978924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:46.820489 containerd[1586]: time="2026-03-02T12:56:46.820344823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 12:56:46.825475 containerd[1586]: time="2026-03-02T12:56:46.825080295Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:46.830584 containerd[1586]: time="2026-03-02T12:56:46.830100392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:46.832871 containerd[1586]: time="2026-03-02T12:56:46.832658064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 3.371465708s" Mar 2 12:56:46.832871 containerd[1586]: time="2026-03-02T12:56:46.832732001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 12:56:46.834694 containerd[1586]: time="2026-03-02T12:56:46.834648601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 12:56:46.843927 containerd[1586]: time="2026-03-02T12:56:46.843880397Z" level=info msg="CreateContainer within sandbox \"84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 12:56:46.872044 containerd[1586]: time="2026-03-02T12:56:46.871951607Z" level=info msg="CreateContainer within sandbox \"84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"13b0479d636dca8e292741e5fe26c08765e17c3a54c362daec50795a83b9b76d\"" Mar 2 12:56:46.873023 containerd[1586]: time="2026-03-02T12:56:46.872986696Z" level=info msg="StartContainer for \"13b0479d636dca8e292741e5fe26c08765e17c3a54c362daec50795a83b9b76d\"" Mar 2 12:56:46.985650 systemd[1]: run-containerd-runc-k8s.io-13b0479d636dca8e292741e5fe26c08765e17c3a54c362daec50795a83b9b76d-runc.2TxUbd.mount: Deactivated successfully. 
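The kubelet pod_startup_latency_tracker record above encodes the pull window directly: the gap between the monotonic m=+ offsets of firstStartedPulling and lastFinishedPulling equals the gap between podStartE2EDuration and podStartSLOduration (for calico-apiserver-677b948c89-7z5vf: 46.885486453 − 41.731700551 ≈ 5.154 s = 28.490120005 − 23.336334103). A sketch that recovers that window from such a record; the field handling is illustrative and assumes the exact quoting shown here:

```python
import re

# Sketch: recover the image-pull window from a kubelet
# pod_startup_latency_tracker record, using the monotonic `m=+...` offsets
# embedded in the quoted timestamps above. Illustrative parsing only.
def pull_window_seconds(record):
    first = re.search(r'firstStartedPulling="[^"]*m=\+([\d.]+)"', record)
    last = re.search(r'lastFinishedPulling="[^"]*m=\+([\d.]+)"', record)
    return float(last.group(1)) - float(first.group(1))

record = ('pod="calico-system/calico-apiserver-677b948c89-7z5vf" '
          'podStartSLOduration=23.336334103 podStartE2EDuration="28.490120005s" '
          'firstStartedPulling="2026-03-02 12:56:38.30598452 +0000 UTC m=+41.731700551" '
          'lastFinishedPulling="2026-03-02 12:56:43.459770424 +0000 UTC m=+46.885486453"')
print(f"image pulls accounted for {pull_window_seconds(record):.3f}s of startup")
```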
Mar 2 12:56:47.102565 containerd[1586]: time="2026-03-02T12:56:47.101269075Z" level=info msg="StartContainer for \"13b0479d636dca8e292741e5fe26c08765e17c3a54c362daec50795a83b9b76d\" returns successfully" Mar 2 12:56:47.425343 kubelet[2761]: I0302 12:56:47.423504 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-9566f57b5-rdrh7" podStartSLOduration=20.991629358 podStartE2EDuration="29.423480549s" podCreationTimestamp="2026-03-02 12:56:18 +0000 UTC" firstStartedPulling="2026-03-02 12:56:38.402207303 +0000 UTC m=+41.827923333" lastFinishedPulling="2026-03-02 12:56:46.834058483 +0000 UTC m=+50.259774524" observedRunningTime="2026-03-02 12:56:47.422779786 +0000 UTC m=+50.848495816" watchObservedRunningTime="2026-03-02 12:56:47.423480549 +0000 UTC m=+50.849196579" Mar 2 12:56:47.735111 containerd[1586]: time="2026-03-02T12:56:47.734656747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:47.770113 containerd[1586]: time="2026-03-02T12:56:47.769702514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825" Mar 2 12:56:47.770632 containerd[1586]: time="2026-03-02T12:56:47.770560663Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:47.778478 containerd[1586]: time="2026-03-02T12:56:47.778361476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:47.780013 containerd[1586]: time="2026-03-02T12:56:47.779896939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 944.727447ms" Mar 2 12:56:47.780072 containerd[1586]: time="2026-03-02T12:56:47.780041538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\"" Mar 2 12:56:47.788298 containerd[1586]: time="2026-03-02T12:56:47.788190086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 12:56:47.799453 containerd[1586]: time="2026-03-02T12:56:47.799308018Z" level=info msg="CreateContainer within sandbox \"b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 12:56:47.872447 containerd[1586]: time="2026-03-02T12:56:47.870702857Z" level=info msg="CreateContainer within sandbox \"b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0165b23071694ce44a2e5c0f49a9ab6979d33bd76c32dd06127a139ab2698b93\"" Mar 2 12:56:47.874010 containerd[1586]: time="2026-03-02T12:56:47.873921950Z" level=info msg="StartContainer for \"0165b23071694ce44a2e5c0f49a9ab6979d33bd76c32dd06127a139ab2698b93\"" Mar 2 12:56:47.915092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220194361.mount: Deactivated successfully. 
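Container starts in this stretch come in request/response pairs: a "StartContainer for \"<id>\"" record when the CRI call arrives and a matching "returns successfully" record when it completes (the goldmane container above took roughly 0.23 s, from 12:56:46.872 to 12:56:47.101). A sketch that pairs the two and reports per-container start latency, assuming one journal record per input string; the nanosecond timestamps are truncated to microseconds for datetime:

```python
import re
from datetime import datetime

# Sketch: pair containerd `StartContainer for "<id>"` requests with their
# `returns successfully` records and report per-container start latency.
# Assumes one journal record per input string; shapes mirror this dump.
TS_RE = re.compile(r'time="(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.(\d+)Z"')
DONE_RE = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')
START_RE = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\""')

def _ts(record):
    base, frac = TS_RE.search(record).groups()
    return datetime.fromisoformat(f"{base}.{frac[:6]}")  # truncate ns -> us

def start_latencies(records):
    pending = {}  # container id -> request timestamp
    for rec in records:
        if (m := DONE_RE.search(rec)):
            started = pending.pop(m.group(1), None)
            if started is not None:
                yield m.group(1)[:12], (_ts(rec) - started).total_seconds()
        elif (m := START_RE.search(rec)):
            pending[m.group(1)] = _ts(rec)
```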
Mar 2 12:56:48.008049 containerd[1586]: time="2026-03-02T12:56:48.007679016Z" level=info msg="StartContainer for \"0165b23071694ce44a2e5c0f49a9ab6979d33bd76c32dd06127a139ab2698b93\" returns successfully" Mar 2 12:56:48.733361 containerd[1586]: time="2026-03-02T12:56:48.733253361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:48.734917 containerd[1586]: time="2026-03-02T12:56:48.734809227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 12:56:48.736979 containerd[1586]: time="2026-03-02T12:56:48.736885028Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:48.740592 containerd[1586]: time="2026-03-02T12:56:48.740489863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:48.742130 containerd[1586]: time="2026-03-02T12:56:48.742012224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 953.748402ms" Mar 2 12:56:48.742130 containerd[1586]: time="2026-03-02T12:56:48.742096953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 12:56:48.746231 containerd[1586]: time="2026-03-02T12:56:48.743997995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 12:56:48.754966 containerd[1586]: time="2026-03-02T12:56:48.754917182Z" level=info msg="CreateContainer within sandbox \"0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 12:56:48.778077 containerd[1586]: time="2026-03-02T12:56:48.777980480Z" level=info msg="CreateContainer within sandbox \"0b0361dd5ac4524ff6da0aeaff01f3db4f30683f2f1e9c04fd2596104a7816ee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"80d2c196a18ff87b79baa29b9febbbc5c64d3c3baed680c1bec4669f3b810877\"" Mar 2 12:56:48.779133 containerd[1586]: time="2026-03-02T12:56:48.779025277Z" level=info msg="StartContainer for \"80d2c196a18ff87b79baa29b9febbbc5c64d3c3baed680c1bec4669f3b810877\"" Mar 2 12:56:48.893222 containerd[1586]: time="2026-03-02T12:56:48.893091528Z" level=info msg="StartContainer for \"80d2c196a18ff87b79baa29b9febbbc5c64d3c3baed680c1bec4669f3b810877\" returns successfully" Mar 2 12:56:49.983279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869930978.mount: Deactivated successfully. 
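Each pull also leaves a "stop pulling image …: active requests=0, bytes read=N" record, and N counts what the pull actually transferred rather than the image size: the warm apiserver re-pull earlier read only 77 bytes, while the cold goldmane pull read 55,607,954. A short sketch for totalling those counters per image reference; the regex is illustrative:

```python
import re
from collections import defaultdict

# Sketch: total the `bytes read=` counters that containerd records when a
# pull stops. These are transfer counts, not unpacked image sizes, so a
# warm pull contributes almost nothing. Regex is illustrative.
STOP_RE = re.compile(r'stop pulling image (\S+): active requests=\d+, bytes read=(\d+)')

def bytes_read_by_image(journal_text):
    totals = defaultdict(int)
    for image, n in STOP_RE.findall(journal_text):
        totals[image] += int(n)
    return dict(totals)
```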
Mar 2 12:56:49.990657 kubelet[2761]: I0302 12:56:49.990196 2761 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 12:56:49.993998 kubelet[2761]: I0302 12:56:49.993926 2761 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 12:56:50.026595 containerd[1586]: time="2026-03-02T12:56:50.026471548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:50.028278 containerd[1586]: time="2026-03-02T12:56:50.028155267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119" Mar 2 12:56:50.030687 containerd[1586]: time="2026-03-02T12:56:50.030058103Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:50.034286 containerd[1586]: time="2026-03-02T12:56:50.034217679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:50.035368 containerd[1586]: time="2026-03-02T12:56:50.035278576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 1.291243714s" Mar 2 12:56:50.035368 containerd[1586]: time="2026-03-02T12:56:50.035324922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\"" Mar 2 12:56:50.047074 containerd[1586]: time="2026-03-02T12:56:50.046870145Z" level=info msg="CreateContainer within sandbox \"b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 12:56:50.076920 containerd[1586]: time="2026-03-02T12:56:50.076742280Z" level=info msg="CreateContainer within sandbox \"b36d1f175193174afddd4bba2ea3f9aa7fa4f2be556b14e5f045303e8698c28b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9c9528f3ff89ffb9773600a08e8b35619146b8a4954350d0a223e003b8faf1c1\"" Mar 2 12:56:50.083330 containerd[1586]: time="2026-03-02T12:56:50.080739451Z" level=info msg="StartContainer for \"9c9528f3ff89ffb9773600a08e8b35619146b8a4954350d0a223e003b8faf1c1\"" Mar 2 12:56:50.241156 containerd[1586]: time="2026-03-02T12:56:50.240489604Z" level=info msg="StartContainer for \"9c9528f3ff89ffb9773600a08e8b35619146b8a4954350d0a223e003b8faf1c1\" returns successfully" Mar 2 12:56:50.466023 kubelet[2761]: I0302 12:56:50.464626 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lczbs" podStartSLOduration=18.385173415 podStartE2EDuration="31.464592352s" podCreationTimestamp="2026-03-02 12:56:19 +0000 UTC" firstStartedPulling="2026-03-02 12:56:35.66432561 +0000 UTC m=+39.090041640" lastFinishedPulling="2026-03-02 
12:56:48.743744537 +0000 UTC m=+52.169460577" observedRunningTime="2026-03-02 12:56:49.456305316 +0000 UTC m=+52.882021356" watchObservedRunningTime="2026-03-02 12:56:50.464592352 +0000 UTC m=+53.890308382" Mar 2 12:56:56.940471 containerd[1586]: time="2026-03-02T12:56:56.940244788Z" level=info msg="StopPodSandbox for \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\"" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.106 [WARNING][5521] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9", Pod:"calico-apiserver-677b948c89-kzgtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a9004f7a01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.107 [INFO][5521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.107 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" iface="eth0" netns="" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.108 [INFO][5521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.108 [INFO][5521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.587 [INFO][5530] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.587 [INFO][5530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.587 [INFO][5530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.597 [WARNING][5530] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.598 [INFO][5530] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.600 [INFO][5530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:57.624880 containerd[1586]: 2026-03-02 12:56:57.605 [INFO][5521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.632136 containerd[1586]: time="2026-03-02T12:56:57.631989437Z" level=info msg="TearDown network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\" successfully" Mar 2 12:56:57.632136 containerd[1586]: time="2026-03-02T12:56:57.632105123Z" level=info msg="StopPodSandbox for \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\" returns successfully" Mar 2 12:56:57.667211 containerd[1586]: time="2026-03-02T12:56:57.667091203Z" level=info msg="RemovePodSandbox for \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\"" Mar 2 12:56:57.667211 containerd[1586]: time="2026-03-02T12:56:57.667191981Z" level=info msg="Forcibly stopping sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\"" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.753 [WARNING][5548] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"3ed6b0b5-ade9-41b5-acf3-6e3e7fe86d8e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f5b40e9dd24a884001f1b0cca10f6eb737ce27025f7390eb19c5521ffebc9b9", Pod:"calico-apiserver-677b948c89-kzgtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a9004f7a01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.753 [INFO][5548] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.753 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" iface="eth0" netns="" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.753 [INFO][5548] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.753 [INFO][5548] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.785 [INFO][5556] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.785 [INFO][5556] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.785 [INFO][5556] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.795 [WARNING][5556] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.795 [INFO][5556] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" HandleID="k8s-pod-network.3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Workload="localhost-k8s-calico--apiserver--677b948c89--kzgtl-eth0" Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.799 [INFO][5556] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:57.806030 containerd[1586]: 2026-03-02 12:56:57.802 [INFO][5548] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a" Mar 2 12:56:57.806030 containerd[1586]: time="2026-03-02T12:56:57.806008723Z" level=info msg="TearDown network for sandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\" successfully" Mar 2 12:56:57.852204 containerd[1586]: time="2026-03-02T12:56:57.852068366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:57.852454 containerd[1586]: time="2026-03-02T12:56:57.852308474Z" level=info msg="RemovePodSandbox \"3e5fad1a28bae8a5744440012ce995d56e65b959fc9151696df6da93b867024a\" returns successfully" Mar 2 12:56:57.855778 containerd[1586]: time="2026-03-02T12:56:57.855724056Z" level=info msg="StopPodSandbox for \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\"" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:57.971 [WARNING][5574] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--rdrh7-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"c27ea198-031e-421a-9756-e262b0869b53", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a", Pod:"goldmane-9566f57b5-rdrh7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e35b2e2e78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:57.972 [INFO][5574] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:57.973 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" iface="eth0" netns="" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:57.973 [INFO][5574] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:57.973 [INFO][5574] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.010 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.010 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.010 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.025 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.025 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.028 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.039579 containerd[1586]: 2026-03-02 12:56:58.032 [INFO][5574] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.039579 containerd[1586]: time="2026-03-02T12:56:58.037064512Z" level=info msg="TearDown network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\" successfully" Mar 2 12:56:58.039579 containerd[1586]: time="2026-03-02T12:56:58.037102694Z" level=info msg="StopPodSandbox for \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\" returns successfully" Mar 2 12:56:58.039579 containerd[1586]: time="2026-03-02T12:56:58.037943983Z" level=info msg="RemovePodSandbox for \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\"" Mar 2 12:56:58.039579 containerd[1586]: time="2026-03-02T12:56:58.037984088Z" level=info msg="Forcibly stopping sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\"" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.103 [WARNING][5602] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--rdrh7-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"c27ea198-031e-421a-9756-e262b0869b53", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84995e41756fbff3b59507afe181c651713d0aff8e1a629cf34c7e6a9804fc3a", Pod:"goldmane-9566f57b5-rdrh7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e35b2e2e78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.103 [INFO][5602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.104 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" iface="eth0" netns="" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.104 [INFO][5602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.104 [INFO][5602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.144 [INFO][5611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.145 [INFO][5611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.145 [INFO][5611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.153 [WARNING][5611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.153 [INFO][5611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" HandleID="k8s-pod-network.9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Workload="localhost-k8s-goldmane--9566f57b5--rdrh7-eth0" Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.156 [INFO][5611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.164790 containerd[1586]: 2026-03-02 12:56:58.160 [INFO][5602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f" Mar 2 12:56:58.164790 containerd[1586]: time="2026-03-02T12:56:58.164880569Z" level=info msg="TearDown network for sandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\" successfully" Mar 2 12:56:58.172358 containerd[1586]: time="2026-03-02T12:56:58.172224852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:58.172358 containerd[1586]: time="2026-03-02T12:56:58.172328515Z" level=info msg="RemovePodSandbox \"9ed19a3d88beeefe77ccd860b531aff7c83b62838f1359a9c07e387bfae2795f\" returns successfully" Mar 2 12:56:58.173283 containerd[1586]: time="2026-03-02T12:56:58.173211494Z" level=info msg="StopPodSandbox for \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\"" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.258 [WARNING][5628] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0", GenerateName:"calico-kube-controllers-6c5c78c78-", Namespace:"calico-system", SelfLink:"", UID:"20530e0e-7523-46f2-bf7b-30bc40bef15b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5c78c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799", Pod:"calico-kube-controllers-6c5c78c78-476fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37e29ad0ada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.258 [INFO][5628] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.259 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" iface="eth0" netns="" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.259 [INFO][5628] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.259 [INFO][5628] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.302 [INFO][5636] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.303 [INFO][5636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.303 [INFO][5636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.314 [WARNING][5636] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.315 [INFO][5636] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.317 [INFO][5636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.325482 containerd[1586]: 2026-03-02 12:56:58.321 [INFO][5628] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.326303 containerd[1586]: time="2026-03-02T12:56:58.325620108Z" level=info msg="TearDown network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\" successfully" Mar 2 12:56:58.326303 containerd[1586]: time="2026-03-02T12:56:58.325729282Z" level=info msg="StopPodSandbox for \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\" returns successfully" Mar 2 12:56:58.327148 containerd[1586]: time="2026-03-02T12:56:58.327079672Z" level=info msg="RemovePodSandbox for \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\"" Mar 2 12:56:58.327148 containerd[1586]: time="2026-03-02T12:56:58.327147259Z" level=info msg="Forcibly stopping sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\"" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.389 [WARNING][5654] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0", GenerateName:"calico-kube-controllers-6c5c78c78-", Namespace:"calico-system", SelfLink:"", UID:"20530e0e-7523-46f2-bf7b-30bc40bef15b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5c78c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"162da9259e68bcc2adcb7d9d8eb8cb290c79680fc47c7f6cc19e636b8146e799", Pod:"calico-kube-controllers-6c5c78c78-476fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37e29ad0ada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.390 [INFO][5654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.390 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" iface="eth0" netns="" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.390 [INFO][5654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.390 [INFO][5654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.438 [INFO][5664] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.438 [INFO][5664] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.438 [INFO][5664] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.448 [WARNING][5664] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.448 [INFO][5664] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" HandleID="k8s-pod-network.76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Workload="localhost-k8s-calico--kube--controllers--6c5c78c78--476fj-eth0" Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.452 [INFO][5664] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.459924 containerd[1586]: 2026-03-02 12:56:58.456 [INFO][5654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241" Mar 2 12:56:58.461370 containerd[1586]: time="2026-03-02T12:56:58.459958174Z" level=info msg="TearDown network for sandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\" successfully" Mar 2 12:56:58.466492 containerd[1586]: time="2026-03-02T12:56:58.466352771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:58.466563 containerd[1586]: time="2026-03-02T12:56:58.466516057Z" level=info msg="RemovePodSandbox \"76890fb103a9110d52807e727d9ba25c6acf3892bc69c0c2d98f17f3c901e241\" returns successfully" Mar 2 12:56:58.467644 containerd[1586]: time="2026-03-02T12:56:58.467593986Z" level=info msg="StopPodSandbox for \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\"" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.529 [WARNING][5681] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" WorkloadEndpoint="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.529 [INFO][5681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.529 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" iface="eth0" netns="" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.529 [INFO][5681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.529 [INFO][5681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.569 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.569 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.569 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.577 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.577 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.580 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.591189 containerd[1586]: 2026-03-02 12:56:58.586 [INFO][5681] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.591189 containerd[1586]: time="2026-03-02T12:56:58.591082596Z" level=info msg="TearDown network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\" successfully" Mar 2 12:56:58.591189 containerd[1586]: time="2026-03-02T12:56:58.591134874Z" level=info msg="StopPodSandbox for \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\" returns successfully" Mar 2 12:56:58.594595 containerd[1586]: time="2026-03-02T12:56:58.592887034Z" level=info msg="RemovePodSandbox for \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\"" Mar 2 12:56:58.594595 containerd[1586]: time="2026-03-02T12:56:58.592928881Z" level=info msg="Forcibly stopping sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\"" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.673 [WARNING][5705] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" WorkloadEndpoint="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.674 [INFO][5705] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.674 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" iface="eth0" netns="" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.674 [INFO][5705] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.674 [INFO][5705] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.727 [INFO][5713] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.727 [INFO][5713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.727 [INFO][5713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.736 [WARNING][5713] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.736 [INFO][5713] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" HandleID="k8s-pod-network.57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Workload="localhost-k8s-whisker--85c99cfd46--7khdv-eth0" Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.740 [INFO][5713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.748605 containerd[1586]: 2026-03-02 12:56:58.744 [INFO][5705] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54" Mar 2 12:56:58.749530 containerd[1586]: time="2026-03-02T12:56:58.748990533Z" level=info msg="TearDown network for sandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\" successfully" Mar 2 12:56:58.756201 containerd[1586]: time="2026-03-02T12:56:58.756118934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:58.756334 containerd[1586]: time="2026-03-02T12:56:58.756269895Z" level=info msg="RemovePodSandbox \"57945546f7840a84e2113f0850dac07376b43d8b6d464903f6ad0635c476db54\" returns successfully" Mar 2 12:56:58.757064 containerd[1586]: time="2026-03-02T12:56:58.756989221Z" level=info msg="StopPodSandbox for \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\"" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.835 [WARNING][5730] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"57fb18fa-6d09-4965-b41b-6c5cac95f136", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53", Pod:"coredns-674b8bbfcf-sxcg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a914c9b81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.835 [INFO][5730] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.835 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" iface="eth0" netns="" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.835 [INFO][5730] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.836 [INFO][5730] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.877 [INFO][5738] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.877 [INFO][5738] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.877 [INFO][5738] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.887 [WARNING][5738] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.887 [INFO][5738] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.890 [INFO][5738] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:58.896872 containerd[1586]: 2026-03-02 12:56:58.893 [INFO][5730] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:58.898693 containerd[1586]: time="2026-03-02T12:56:58.896877892Z" level=info msg="TearDown network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\" successfully" Mar 2 12:56:58.898693 containerd[1586]: time="2026-03-02T12:56:58.896919210Z" level=info msg="StopPodSandbox for \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\" returns successfully" Mar 2 12:56:58.898693 containerd[1586]: time="2026-03-02T12:56:58.897847312Z" level=info msg="RemovePodSandbox for \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\"" Mar 2 12:56:58.898693 containerd[1586]: time="2026-03-02T12:56:58.897882438Z" level=info msg="Forcibly stopping sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\"" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:58.974 [WARNING][5755] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"57fb18fa-6d09-4965-b41b-6c5cac95f136", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9224660117ae806a790833ccd0d90e32384028064ad771cf4902018bdb7c53", Pod:"coredns-674b8bbfcf-sxcg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a914c9b81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:58.974 [INFO][5755] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:58.974 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" iface="eth0" netns="" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:58.974 [INFO][5755] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:58.974 [INFO][5755] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.013 [INFO][5763] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.013 [INFO][5763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.013 [INFO][5763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.029 [WARNING][5763] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.029 [INFO][5763] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" HandleID="k8s-pod-network.2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Workload="localhost-k8s-coredns--674b8bbfcf--sxcg5-eth0" Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.031 [INFO][5763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:59.038220 containerd[1586]: 2026-03-02 12:56:59.034 [INFO][5755] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd" Mar 2 12:56:59.038758 containerd[1586]: time="2026-03-02T12:56:59.038200559Z" level=info msg="TearDown network for sandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\" successfully" Mar 2 12:56:59.043126 containerd[1586]: time="2026-03-02T12:56:59.043043873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:59.043539 containerd[1586]: time="2026-03-02T12:56:59.043126177Z" level=info msg="RemovePodSandbox \"2f65bb0e1e587535c8ac6d42dbe3182446f9f4490373bebe3167039dc2fa70dd\" returns successfully" Mar 2 12:56:59.044008 containerd[1586]: time="2026-03-02T12:56:59.043951533Z" level=info msg="StopPodSandbox for \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\"" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.094 [WARNING][5780] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"5a848f00-f6ec-4385-a50f-239a27273d12", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13", Pod:"calico-apiserver-677b948c89-7z5vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5933164bc83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.094 [INFO][5780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.094 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" iface="eth0" netns="" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.094 [INFO][5780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.094 [INFO][5780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.136 [INFO][5788] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.136 [INFO][5788] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.136 [INFO][5788] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.145 [WARNING][5788] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.145 [INFO][5788] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.149 [INFO][5788] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:59.155835 containerd[1586]: 2026-03-02 12:56:59.152 [INFO][5780] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.156475 containerd[1586]: time="2026-03-02T12:56:59.155897507Z" level=info msg="TearDown network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\" successfully" Mar 2 12:56:59.156475 containerd[1586]: time="2026-03-02T12:56:59.155928666Z" level=info msg="StopPodSandbox for \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\" returns successfully" Mar 2 12:56:59.156553 containerd[1586]: time="2026-03-02T12:56:59.156512564Z" level=info msg="RemovePodSandbox for \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\"" Mar 2 12:56:59.156553 containerd[1586]: time="2026-03-02T12:56:59.156541027Z" level=info msg="Forcibly stopping sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\"" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.211 [WARNING][5807] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0", GenerateName:"calico-apiserver-677b948c89-", Namespace:"calico-system", SelfLink:"", UID:"5a848f00-f6ec-4385-a50f-239a27273d12", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677b948c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46532ba19a9027f3f4eb6aed1b6e59bd274f423ece8da1277f702c0bb77bcd13", Pod:"calico-apiserver-677b948c89-7z5vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5933164bc83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.212 [INFO][5807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.212 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" iface="eth0" netns="" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.212 [INFO][5807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.212 [INFO][5807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.250 [INFO][5816] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.250 [INFO][5816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.250 [INFO][5816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.261 [WARNING][5816] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.262 [INFO][5816] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" HandleID="k8s-pod-network.73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Workload="localhost-k8s-calico--apiserver--677b948c89--7z5vf-eth0" Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.264 [INFO][5816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:59.270930 containerd[1586]: 2026-03-02 12:56:59.267 [INFO][5807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6" Mar 2 12:56:59.271658 containerd[1586]: time="2026-03-02T12:56:59.270967156Z" level=info msg="TearDown network for sandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\" successfully" Mar 2 12:56:59.277122 containerd[1586]: time="2026-03-02T12:56:59.277021525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:59.277297 containerd[1586]: time="2026-03-02T12:56:59.277184840Z" level=info msg="RemovePodSandbox \"73344e9c822e1d25f65f55584932f858e5e5874ef23f7c260e730ed8aedd6fe6\" returns successfully" Mar 2 12:56:59.278217 containerd[1586]: time="2026-03-02T12:56:59.278187307Z" level=info msg="StopPodSandbox for \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\"" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.344 [WARNING][5834] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73b727e3-efa9-44aa-a2c1-a5653c8e04db", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49", Pod:"coredns-674b8bbfcf-cv6q7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b7bdfc1d63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.344 [INFO][5834] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.344 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" iface="eth0" netns="" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.344 [INFO][5834] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.344 [INFO][5834] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.375 [INFO][5842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.375 [INFO][5842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.375 [INFO][5842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.384 [WARNING][5842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.384 [INFO][5842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.386 [INFO][5842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:59.392724 containerd[1586]: 2026-03-02 12:56:59.389 [INFO][5834] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.392724 containerd[1586]: time="2026-03-02T12:56:59.392708883Z" level=info msg="TearDown network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\" successfully" Mar 2 12:56:59.392724 containerd[1586]: time="2026-03-02T12:56:59.392735633Z" level=info msg="StopPodSandbox for \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\" returns successfully" Mar 2 12:56:59.393829 containerd[1586]: time="2026-03-02T12:56:59.393535274Z" level=info msg="RemovePodSandbox for \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\"" Mar 2 12:56:59.393829 containerd[1586]: time="2026-03-02T12:56:59.393565290Z" level=info msg="Forcibly stopping sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\"" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.457 [WARNING][5861] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73b727e3-efa9-44aa-a2c1-a5653c8e04db", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d2222df079fc142ba53fd3504b45d0a1455e2bba029e081d204811ea3aec49", Pod:"coredns-674b8bbfcf-cv6q7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b7bdfc1d63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.458 [INFO][5861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.458 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" iface="eth0" netns="" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.458 [INFO][5861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.458 [INFO][5861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.508 [INFO][5869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.508 [INFO][5869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.508 [INFO][5869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.520 [WARNING][5869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.520 [INFO][5869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" HandleID="k8s-pod-network.8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Workload="localhost-k8s-coredns--674b8bbfcf--cv6q7-eth0" Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.523 [INFO][5869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:56:59.531569 containerd[1586]: 2026-03-02 12:56:59.527 [INFO][5861] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55" Mar 2 12:56:59.531569 containerd[1586]: time="2026-03-02T12:56:59.531497426Z" level=info msg="TearDown network for sandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\" successfully" Mar 2 12:56:59.538371 containerd[1586]: time="2026-03-02T12:56:59.538286316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:56:59.538494 containerd[1586]: time="2026-03-02T12:56:59.538467874Z" level=info msg="RemovePodSandbox \"8b306779096bc0b9171ac1e1cc700dfc1b53b318a3caa259cfd180b09da88d55\" returns successfully" Mar 2 12:57:04.847549 kubelet[2761]: E0302 12:57:04.845961 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:04.846136 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:43378.service - OpenSSH per-connection server daemon (10.0.0.1:43378). Mar 2 12:57:04.975351 sshd[5898]: Accepted publickey for core from 10.0.0.1 port 43378 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:04.978993 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:04.992313 systemd-logind[1562]: New session 10 of user core. Mar 2 12:57:05.000786 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 12:57:06.334745 sshd[5898]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:06.344153 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:43378.service: Deactivated successfully. Mar 2 12:57:06.352316 kubelet[2761]: I0302 12:57:06.352153 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c55479bf8-7blqb" podStartSLOduration=18.742552238000002 podStartE2EDuration="30.352133017s" podCreationTimestamp="2026-03-02 12:56:36 +0000 UTC" firstStartedPulling="2026-03-02 12:56:38.428628822 +0000 UTC m=+41.854344852" lastFinishedPulling="2026-03-02 12:56:50.0382096 +0000 UTC m=+53.463925631" observedRunningTime="2026-03-02 12:56:50.483519575 +0000 UTC m=+53.909235605" watchObservedRunningTime="2026-03-02 12:57:06.352133017 +0000 UTC m=+69.777849047" Mar 2 12:57:06.354141 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 2 12:57:06.387641 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Mar 2 12:57:06.390521 systemd-logind[1562]: Removed session 10. Mar 2 12:57:06.864537 systemd-journald[1170]: Under memory pressure, flushing caches. Mar 2 12:57:06.860691 systemd-resolved[1463]: Under memory pressure, flushing caches. Mar 2 12:57:06.860751 systemd-resolved[1463]: Flushed all caches. Mar 2 12:57:08.911202 systemd-resolved[1463]: Under memory pressure, flushing caches. Mar 2 12:57:08.921761 systemd-journald[1170]: Under memory pressure, flushing caches. Mar 2 12:57:08.911214 systemd-resolved[1463]: Flushed all caches. Mar 2 12:57:11.339242 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:43384.service - OpenSSH per-connection server daemon (10.0.0.1:43384). Mar 2 12:57:11.389215 systemd[1]: run-containerd-runc-k8s.io-13b0479d636dca8e292741e5fe26c08765e17c3a54c362daec50795a83b9b76d-runc.LPhTpg.mount: Deactivated successfully. Mar 2 12:57:11.402070 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 43384 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:11.405604 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:11.422871 systemd-logind[1562]: New session 11 of user core. Mar 2 12:57:11.429993 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 12:57:11.643293 sshd[5954]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:11.649720 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:43384.service: Deactivated successfully. Mar 2 12:57:11.653647 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Mar 2 12:57:11.653748 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 12:57:11.655752 systemd-logind[1562]: Removed session 11. Mar 2 12:57:13.748109 kubelet[2761]: E0302 12:57:13.747996 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:16.664880 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:48638.service - OpenSSH per-connection server daemon (10.0.0.1:48638). Mar 2 12:57:16.704085 sshd[6018]: Accepted publickey for core from 10.0.0.1 port 48638 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:16.706547 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:16.713502 systemd-logind[1562]: New session 12 of user core. Mar 2 12:57:16.717714 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 12:57:16.857627 sshd[6018]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:16.862929 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:48638.service: Deactivated successfully. Mar 2 12:57:16.866745 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 12:57:16.867698 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Mar 2 12:57:16.869503 systemd-logind[1562]: Removed session 12. Mar 2 12:57:20.747592 kubelet[2761]: E0302 12:57:20.747467 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:21.879016 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:48644.service - OpenSSH per-connection server daemon (10.0.0.1:48644). 
Mar 2 12:57:21.933649 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 48644 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:21.936645 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:21.944693 systemd-logind[1562]: New session 13 of user core. Mar 2 12:57:21.954879 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 12:57:22.168963 sshd[6073]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:22.175615 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:48644.service: Deactivated successfully. Mar 2 12:57:22.179957 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 12:57:22.179962 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Mar 2 12:57:22.182611 systemd-logind[1562]: Removed session 13. Mar 2 12:57:25.445188 kubelet[2761]: E0302 12:57:25.445063 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:27.187678 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:52778.service - OpenSSH per-connection server daemon (10.0.0.1:52778). Mar 2 12:57:27.261434 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 52778 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:27.264564 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:27.271987 systemd-logind[1562]: New session 14 of user core. Mar 2 12:57:27.280947 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 12:57:27.506678 sshd[6089]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:27.533726 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:52778.service: Deactivated successfully. Mar 2 12:57:27.538576 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Mar 2 12:57:27.538847 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 12:57:27.541871 systemd-logind[1562]: Removed session 14. Mar 2 12:57:32.521967 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:49546.service - OpenSSH per-connection server daemon (10.0.0.1:49546). Mar 2 12:57:32.575457 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 49546 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:32.578230 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:32.584681 systemd-logind[1562]: New session 15 of user core. Mar 2 12:57:32.590873 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 12:57:32.747969 sshd[6106]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:32.752963 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:49546.service: Deactivated successfully. Mar 2 12:57:32.756881 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Mar 2 12:57:32.757053 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 12:57:32.758904 systemd-logind[1562]: Removed session 15. Mar 2 12:57:32.871286 kubelet[2761]: I0302 12:57:32.871096 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:57:37.760672 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:49562.service - OpenSSH per-connection server daemon (10.0.0.1:49562). 
Mar 2 12:57:37.835833 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 49562 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:37.838511 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:37.845746 systemd-logind[1562]: New session 16 of user core. Mar 2 12:57:37.855935 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 12:57:38.065936 sshd[6149]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:38.069941 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:49562.service: Deactivated successfully. Mar 2 12:57:38.073968 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Mar 2 12:57:38.074116 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 12:57:38.075961 systemd-logind[1562]: Removed session 16. Mar 2 12:57:39.747694 kubelet[2761]: E0302 12:57:39.747518 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:43.076681 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:60426.service - OpenSSH per-connection server daemon (10.0.0.1:60426). Mar 2 12:57:43.129525 sshd[6185]: Accepted publickey for core from 10.0.0.1 port 60426 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:43.132212 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:43.139241 systemd-logind[1562]: New session 17 of user core. Mar 2 12:57:43.150927 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 12:57:43.336919 sshd[6185]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:43.342221 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:60426.service: Deactivated successfully. Mar 2 12:57:43.345449 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Mar 2 12:57:43.345716 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 12:57:43.347766 systemd-logind[1562]: Removed session 17. Mar 2 12:57:48.355793 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:60432.service - OpenSSH per-connection server daemon (10.0.0.1:60432). Mar 2 12:57:48.391122 sshd[6221]: Accepted publickey for core from 10.0.0.1 port 60432 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:48.393762 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:48.403231 systemd-logind[1562]: New session 18 of user core. Mar 2 12:57:48.409909 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 12:57:48.551719 sshd[6221]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:48.562661 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:60436.service - OpenSSH per-connection server daemon (10.0.0.1:60436). Mar 2 12:57:48.563212 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:60432.service: Deactivated successfully. Mar 2 12:57:48.567531 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Mar 2 12:57:48.569209 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 12:57:48.570989 systemd-logind[1562]: Removed session 18. Mar 2 12:57:48.606744 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 60436 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:48.610315 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:48.617671 systemd-logind[1562]: New session 19 of user core. 
Mar 2 12:57:48.624708 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 12:57:48.847248 sshd[6235]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:48.863892 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:60448.service - OpenSSH per-connection server daemon (10.0.0.1:60448). Mar 2 12:57:48.868004 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:60436.service: Deactivated successfully. Mar 2 12:57:48.872636 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 12:57:48.876100 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Mar 2 12:57:48.879536 systemd-logind[1562]: Removed session 19. Mar 2 12:57:48.919678 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 60448 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:48.922206 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:48.929774 systemd-logind[1562]: New session 20 of user core. Mar 2 12:57:48.939933 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 12:57:49.125127 sshd[6249]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:49.129618 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:60448.service: Deactivated successfully. Mar 2 12:57:49.132570 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 12:57:49.132738 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Mar 2 12:57:49.134791 systemd-logind[1562]: Removed session 20. Mar 2 12:57:49.748942 kubelet[2761]: E0302 12:57:49.748327 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:54.150012 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:32816.service - OpenSSH per-connection server daemon (10.0.0.1:32816). Mar 2 12:57:54.189273 sshd[6290]: Accepted publickey for core from 10.0.0.1 port 32816 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:54.191666 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:54.199285 systemd-logind[1562]: New session 21 of user core. Mar 2 12:57:54.204945 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 12:57:54.354652 sshd[6290]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:54.360617 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:32816.service: Deactivated successfully. Mar 2 12:57:54.363739 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 12:57:54.363905 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Mar 2 12:57:54.367348 systemd-logind[1562]: Removed session 21. Mar 2 12:57:59.370894 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:32818.service - OpenSSH per-connection server daemon (10.0.0.1:32818). Mar 2 12:57:59.418566 sshd[6310]: Accepted publickey for core from 10.0.0.1 port 32818 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:59.421298 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:59.428195 systemd-logind[1562]: New session 22 of user core. Mar 2 12:57:59.437868 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 12:57:59.575744 sshd[6310]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:59.586654 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:32826.service - OpenSSH per-connection server daemon (10.0.0.1:32826). 
Mar 2 12:57:59.587292 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:32818.service: Deactivated successfully. Mar 2 12:57:59.590623 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 12:57:59.592967 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Mar 2 12:57:59.594962 systemd-logind[1562]: Removed session 22. Mar 2 12:57:59.634092 sshd[6335]: Accepted publickey for core from 10.0.0.1 port 32826 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:59.636717 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:59.644063 systemd-logind[1562]: New session 23 of user core. Mar 2 12:57:59.656776 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 12:58:00.051876 sshd[6335]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:00.061740 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:32830.service - OpenSSH per-connection server daemon (10.0.0.1:32830). Mar 2 12:58:00.062921 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:32826.service: Deactivated successfully. Mar 2 12:58:00.066560 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Mar 2 12:58:00.067975 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 12:58:00.069316 systemd-logind[1562]: Removed session 23. Mar 2 12:58:00.112658 sshd[6347]: Accepted publickey for core from 10.0.0.1 port 32830 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:00.115456 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:00.121465 systemd-logind[1562]: New session 24 of user core. Mar 2 12:58:00.131729 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 12:58:00.891487 sshd[6347]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:00.902526 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:32836.service - OpenSSH per-connection server daemon (10.0.0.1:32836). Mar 2 12:58:00.903216 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:32830.service: Deactivated successfully. Mar 2 12:58:00.913268 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 12:58:00.915801 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Mar 2 12:58:00.924274 systemd-logind[1562]: Removed session 24. Mar 2 12:58:00.968540 sshd[6373]: Accepted publickey for core from 10.0.0.1 port 32836 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:00.970541 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:00.977219 systemd-logind[1562]: New session 25 of user core. Mar 2 12:58:00.985777 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 2 12:58:01.604329 sshd[6373]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:01.621893 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:32844.service - OpenSSH per-connection server daemon (10.0.0.1:32844). Mar 2 12:58:01.624793 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:32836.service: Deactivated successfully. Mar 2 12:58:01.629209 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 12:58:01.637505 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit. Mar 2 12:58:01.641707 systemd-logind[1562]: Removed session 25. 
Mar 2 12:58:01.669488 sshd[6389]: Accepted publickey for core from 10.0.0.1 port 32844 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:01.672326 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:01.681071 systemd-logind[1562]: New session 26 of user core. Mar 2 12:58:01.690130 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 2 12:58:01.878332 sshd[6389]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:01.889151 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:32844.service: Deactivated successfully. Mar 2 12:58:01.892638 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit. Mar 2 12:58:01.892711 systemd[1]: session-26.scope: Deactivated successfully. Mar 2 12:58:01.895949 systemd-logind[1562]: Removed session 26. Mar 2 12:58:06.890874 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:34478.service - OpenSSH per-connection server daemon (10.0.0.1:34478). Mar 2 12:58:06.936602 sshd[6431]: Accepted publickey for core from 10.0.0.1 port 34478 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:06.939345 sshd[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:06.946977 systemd-logind[1562]: New session 27 of user core. Mar 2 12:58:06.954048 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 2 12:58:07.155494 sshd[6431]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:07.162352 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:34478.service: Deactivated successfully. Mar 2 12:58:07.168290 systemd[1]: session-27.scope: Deactivated successfully. Mar 2 12:58:07.169639 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit. Mar 2 12:58:07.171513 systemd-logind[1562]: Removed session 27. Mar 2 12:58:10.748237 kubelet[2761]: E0302 12:58:10.748050 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:12.174948 systemd[1]: Started sshd@27-10.0.0.20:22-10.0.0.1:46540.service - OpenSSH per-connection server daemon (10.0.0.1:46540). Mar 2 12:58:12.234147 sshd[6529]: Accepted publickey for core from 10.0.0.1 port 46540 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:12.237059 sshd[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:12.243719 systemd-logind[1562]: New session 28 of user core. Mar 2 12:58:12.248756 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 2 12:58:12.447659 sshd[6529]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:12.454495 systemd[1]: sshd@27-10.0.0.20:22-10.0.0.1:46540.service: Deactivated successfully. Mar 2 12:58:12.457294 systemd-logind[1562]: Session 28 logged out. Waiting for processes to exit. Mar 2 12:58:12.457358 systemd[1]: session-28.scope: Deactivated successfully. Mar 2 12:58:12.459933 systemd-logind[1562]: Removed session 28. Mar 2 12:58:17.462964 systemd[1]: Started sshd@28-10.0.0.20:22-10.0.0.1:46552.service - OpenSSH per-connection server daemon (10.0.0.1:46552). Mar 2 12:58:17.516455 sshd[6562]: Accepted publickey for core from 10.0.0.1 port 46552 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:17.519284 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:17.526470 systemd-logind[1562]: New session 29 of user core. 
Mar 2 12:58:17.533748 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 2 12:58:17.736047 sshd[6562]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:17.742349 systemd[1]: sshd@28-10.0.0.20:22-10.0.0.1:46552.service: Deactivated successfully. Mar 2 12:58:17.747564 systemd[1]: session-29.scope: Deactivated successfully. Mar 2 12:58:17.747571 systemd-logind[1562]: Session 29 logged out. Waiting for processes to exit. Mar 2 12:58:17.750107 systemd-logind[1562]: Removed session 29. Mar 2 12:58:19.466455 systemd[1]: run-containerd-runc-k8s.io-13b0479d636dca8e292741e5fe26c08765e17c3a54c362daec50795a83b9b76d-runc.LxnLyC.mount: Deactivated successfully. Mar 2 12:58:20.749169 kubelet[2761]: E0302 12:58:20.748599 2761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:22.750794 systemd[1]: Started sshd@29-10.0.0.20:22-10.0.0.1:41304.service - OpenSSH per-connection server daemon (10.0.0.1:41304). Mar 2 12:58:22.797290 sshd[6599]: Accepted publickey for core from 10.0.0.1 port 41304 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:58:22.799676 sshd[6599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:22.806177 systemd-logind[1562]: New session 30 of user core. Mar 2 12:58:22.826958 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 2 12:58:23.060764 sshd[6599]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:23.066734 systemd[1]: sshd@29-10.0.0.20:22-10.0.0.1:41304.service: Deactivated successfully. Mar 2 12:58:23.071166 systemd-logind[1562]: Session 30 logged out. Waiting for processes to exit. Mar 2 12:58:23.071189 systemd[1]: session-30.scope: Deactivated successfully. Mar 2 12:58:23.074134 systemd-logind[1562]: Removed session 30.