Mar 2 12:57:26.227199 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026 Mar 2 12:57:26.227234 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 12:57:26.227253 kernel: BIOS-provided physical RAM map: Mar 2 12:57:26.227264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 2 12:57:26.227274 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 2 12:57:26.227284 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 2 12:57:26.227295 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 2 12:57:26.227306 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 2 12:57:26.227316 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 2 12:57:26.227326 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 2 12:57:26.227341 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 2 12:57:26.227352 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 2 12:57:26.227362 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 2 12:57:26.227429 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 2 12:57:26.227443 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 2 12:57:26.227455 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 2 12:57:26.227472 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 2 12:57:26.227483 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 2 12:57:26.227495 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 2 12:57:26.227506 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 2 12:57:26.227517 kernel: NX (Execute Disable) protection: active Mar 2 12:57:26.227528 kernel: APIC: Static calls initialized Mar 2 12:57:26.227540 kernel: efi: EFI v2.7 by EDK II Mar 2 12:57:26.227551 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 2 12:57:26.227563 kernel: SMBIOS 2.8 present. Mar 2 12:57:26.227574 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 2 12:57:26.227585 kernel: Hypervisor detected: KVM Mar 2 12:57:26.227601 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 2 12:57:26.227613 kernel: kvm-clock: using sched offset of 7434973393 cycles Mar 2 12:57:26.227624 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 2 12:57:26.227636 kernel: tsc: Detected 2445.424 MHz processor Mar 2 12:57:26.227648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 2 12:57:26.227660 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 2 12:57:26.227672 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 2 12:57:26.227684 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 2 12:57:26.227696 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 2 12:57:26.227712 kernel: Using GB pages for direct mapping Mar 2 12:57:26.227723 kernel: Secure boot disabled Mar 2 12:57:26.227733 kernel: ACPI: Early table checksum verification disabled Mar 2 12:57:26.227745 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 2 12:57:26.227763 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 2 12:57:26.227776 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:57:26.227788 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:57:26.227804 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 2 12:57:26.227817 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:57:26.227829 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:57:26.227842 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:57:26.227854 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 12:57:26.227867 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 2 12:57:26.227879 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 2 12:57:26.227896 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 2 12:57:26.227908 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 2 12:57:26.227920 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 2 12:57:26.227932 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 2 12:57:26.227944 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 2 12:57:26.227956 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 2 12:57:26.227968 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 2 12:57:26.227980 kernel: No NUMA configuration found Mar 2 12:57:26.227992 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 2 12:57:26.228009 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 2 12:57:26.228021 kernel: Zone ranges: Mar 2 12:57:26.228033 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 2 12:57:26.228045 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 2 12:57:26.228058 kernel: Normal empty Mar 2 12:57:26.228070 
kernel: Movable zone start for each node Mar 2 12:57:26.228118 kernel: Early memory node ranges Mar 2 12:57:26.228132 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 2 12:57:26.228144 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 2 12:57:26.228156 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 2 12:57:26.228172 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 2 12:57:26.228184 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 2 12:57:26.228196 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 2 12:57:26.228207 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 2 12:57:26.228219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 2 12:57:26.228232 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 2 12:57:26.228244 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 2 12:57:26.228257 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 2 12:57:26.228269 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 2 12:57:26.228285 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 2 12:57:26.228326 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 2 12:57:26.228339 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 2 12:57:26.228351 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 2 12:57:26.228363 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 2 12:57:26.228457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 2 12:57:26.228470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 2 12:57:26.228483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 2 12:57:26.228517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 2 12:57:26.228535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 2 12:57:26.228587 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 2 12:57:26.228600 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 2 12:57:26.228611 kernel: TSC deadline timer available Mar 2 12:57:26.228622 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 2 12:57:26.228633 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 2 12:57:26.228668 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 2 12:57:26.228681 kernel: kvm-guest: setup PV sched yield Mar 2 12:57:26.228693 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 2 12:57:26.228706 kernel: Booting paravirtualized kernel on KVM Mar 2 12:57:26.228723 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 2 12:57:26.228736 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 2 12:57:26.228748 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 2 12:57:26.228760 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 2 12:57:26.228773 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 2 12:57:26.228785 kernel: kvm-guest: PV spinlocks enabled Mar 2 12:57:26.228797 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 2 12:57:26.228811 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 12:57:26.228828 kernel: random: crng init done Mar 2 12:57:26.228840 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 2 12:57:26.228852 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 2 12:57:26.228864 kernel: Fallback order for Node 0: 0 Mar 2 12:57:26.228876 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 2 12:57:26.228888 kernel: Policy zone: DMA32 Mar 2 12:57:26.228900 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 2 12:57:26.228913 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 2 12:57:26.228926 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 2 12:57:26.228942 kernel: ftrace: allocating 37996 entries in 149 pages Mar 2 12:57:26.228954 kernel: ftrace: allocated 149 pages with 4 groups Mar 2 12:57:26.228966 kernel: Dynamic Preempt: voluntary Mar 2 12:57:26.228979 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 2 12:57:26.229005 kernel: rcu: RCU event tracing is enabled. Mar 2 12:57:26.229022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 2 12:57:26.229035 kernel: Trampoline variant of Tasks RCU enabled. Mar 2 12:57:26.229046 kernel: Rude variant of Tasks RCU enabled. Mar 2 12:57:26.229059 kernel: Tracing variant of Tasks RCU enabled. Mar 2 12:57:26.229071 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 2 12:57:26.229125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 2 12:57:26.229145 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 2 12:57:26.229158 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 2 12:57:26.229171 kernel: Console: colour dummy device 80x25 Mar 2 12:57:26.229184 kernel: printk: console [ttyS0] enabled Mar 2 12:57:26.229196 kernel: ACPI: Core revision 20230628 Mar 2 12:57:26.229215 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 2 12:57:26.229228 kernel: APIC: Switch to symmetric I/O mode setup Mar 2 12:57:26.229241 kernel: x2apic enabled Mar 2 12:57:26.229254 kernel: APIC: Switched APIC routing to: physical x2apic Mar 2 12:57:26.229267 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 2 12:57:26.229279 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 2 12:57:26.229292 kernel: kvm-guest: setup PV IPIs Mar 2 12:57:26.229304 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 2 12:57:26.229317 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 2 12:57:26.229333 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Mar 2 12:57:26.229346 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 2 12:57:26.229359 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 2 12:57:26.229438 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 2 12:57:26.229452 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 2 12:57:26.229464 kernel: Spectre V2 : Mitigation: Retpolines Mar 2 12:57:26.229477 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 2 12:57:26.229489 kernel: Speculative Store Bypass: Vulnerable Mar 2 12:57:26.229502 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 2 12:57:26.229521 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 2 12:57:26.229534 kernel: active return thunk: srso_alias_return_thunk Mar 2 12:57:26.229546 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 2 12:57:26.229558 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 2 12:57:26.229570 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 2 12:57:26.229583 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 2 12:57:26.229596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 2 12:57:26.229609 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 2 12:57:26.229622 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 2 12:57:26.229639 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 2 12:57:26.229652 kernel: Freeing SMP alternatives memory: 32K Mar 2 12:57:26.229664 kernel: pid_max: default: 32768 minimum: 301 Mar 2 12:57:26.229677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 2 12:57:26.229689 kernel: landlock: Up and running. Mar 2 12:57:26.229702 kernel: SELinux: Initializing. Mar 2 12:57:26.229715 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 12:57:26.229728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 12:57:26.229740 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 2 12:57:26.229758 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 12:57:26.229771 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 12:57:26.229784 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 12:57:26.229797 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 2 12:57:26.229810 kernel: signal: max sigframe size: 1776 Mar 2 12:57:26.229822 kernel: rcu: Hierarchical SRCU implementation. Mar 2 12:57:26.229836 kernel: rcu: Max phase no-delay instances is 400. Mar 2 12:57:26.229849 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 2 12:57:26.229865 kernel: smp: Bringing up secondary CPUs ... Mar 2 12:57:26.229878 kernel: smpboot: x86: Booting SMP configuration: Mar 2 12:57:26.229891 kernel: .... node #0, CPUs: #1 #2 #3 Mar 2 12:57:26.229904 kernel: smp: Brought up 1 node, 4 CPUs Mar 2 12:57:26.229917 kernel: smpboot: Max logical packages: 1 Mar 2 12:57:26.229930 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 2 12:57:26.229943 kernel: devtmpfs: initialized Mar 2 12:57:26.229955 kernel: x86/mm: Memory block size: 128MB Mar 2 12:57:26.229968 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 2 12:57:26.229980 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 2 12:57:26.229996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 2 12:57:26.230009 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 2 12:57:26.230023 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 2 12:57:26.230036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 2 12:57:26.230049 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 2 12:57:26.230062 kernel: pinctrl core: initialized pinctrl subsystem Mar 2 12:57:26.230106 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 2 12:57:26.230121 kernel: audit: initializing netlink subsys (disabled) Mar 2 12:57:26.230158 kernel: audit: type=2000 audit(1772456241.411:1): state=initialized audit_enabled=0 res=1 Mar 2 12:57:26.230190 kernel: thermal_sys: Registered thermal governor 
'step_wise' Mar 2 12:57:26.230204 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 2 12:57:26.230234 kernel: cpuidle: using governor menu Mar 2 12:57:26.230247 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 2 12:57:26.230260 kernel: dca service started, version 1.12.1 Mar 2 12:57:26.230273 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 2 12:57:26.230286 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 2 12:57:26.230299 kernel: PCI: Using configuration type 1 for base access Mar 2 12:57:26.230316 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 2 12:57:26.230329 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 2 12:57:26.230343 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 2 12:57:26.230355 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 2 12:57:26.230417 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 2 12:57:26.230431 kernel: ACPI: Added _OSI(Module Device) Mar 2 12:57:26.230444 kernel: ACPI: Added _OSI(Processor Device) Mar 2 12:57:26.230456 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 2 12:57:26.230468 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 2 12:57:26.230485 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 2 12:57:26.230498 kernel: ACPI: Interpreter enabled Mar 2 12:57:26.230511 kernel: ACPI: PM: (supports S0 S3 S5) Mar 2 12:57:26.230524 kernel: ACPI: Using IOAPIC for interrupt routing Mar 2 12:57:26.230538 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 2 12:57:26.230551 kernel: PCI: Using E820 reservations for host bridge windows Mar 2 12:57:26.230565 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 2 12:57:26.230578 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 
2 12:57:26.231005 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 2 12:57:26.231273 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 2 12:57:26.231709 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 2 12:57:26.231733 kernel: PCI host bridge to bus 0000:00 Mar 2 12:57:26.232260 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 2 12:57:26.232593 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 2 12:57:26.232884 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 2 12:57:26.233119 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 2 12:57:26.233313 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 2 12:57:26.233563 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 2 12:57:26.233754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 2 12:57:26.234071 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 2 12:57:26.234548 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 2 12:57:26.234762 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 2 12:57:26.234971 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 2 12:57:26.235214 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 2 12:57:26.235483 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 2 12:57:26.235757 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 2 12:57:26.236012 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 2 12:57:26.236275 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 2 12:57:26.236570 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 2 12:57:26.236773 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 2 
12:57:26.237062 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 2 12:57:26.237312 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 2 12:57:26.237591 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 2 12:57:26.237793 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 2 12:57:26.238063 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 2 12:57:26.238315 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 2 12:57:26.238579 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 2 12:57:26.238781 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 2 12:57:26.238979 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 2 12:57:26.239317 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 2 12:57:26.239730 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 2 12:57:26.240025 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 2 12:57:26.240274 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 2 12:57:26.240624 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 2 12:57:26.240916 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 2 12:57:26.241155 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 2 12:57:26.241176 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 2 12:57:26.241190 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 2 12:57:26.241203 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 2 12:57:26.241223 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 2 12:57:26.241236 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 2 12:57:26.241247 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 2 12:57:26.241260 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 2 12:57:26.241273 kernel: ACPI: PCI: Interrupt 
link LNKH configured for IRQ 11 Mar 2 12:57:26.241286 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 2 12:57:26.241299 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 2 12:57:26.241312 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 2 12:57:26.241325 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 2 12:57:26.241343 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 2 12:57:26.241356 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 2 12:57:26.241423 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 2 12:57:26.241437 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 2 12:57:26.241449 kernel: iommu: Default domain type: Translated Mar 2 12:57:26.241467 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 2 12:57:26.241479 kernel: efivars: Registered efivars operations Mar 2 12:57:26.241492 kernel: PCI: Using ACPI for IRQ routing Mar 2 12:57:26.241505 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 2 12:57:26.241517 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 2 12:57:26.241536 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 2 12:57:26.241548 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 2 12:57:26.241561 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 2 12:57:26.241762 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 2 12:57:26.241964 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 2 12:57:26.242205 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 2 12:57:26.242224 kernel: vgaarb: loaded Mar 2 12:57:26.242237 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 2 12:57:26.242257 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 2 12:57:26.242269 kernel: clocksource: Switched to clocksource kvm-clock Mar 2 12:57:26.242282 kernel: VFS: Disk quotas dquot_6.6.0 Mar 2 
12:57:26.242295 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 2 12:57:26.242308 kernel: pnp: PnP ACPI init Mar 2 12:57:26.242770 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 2 12:57:26.242792 kernel: pnp: PnP ACPI: found 6 devices Mar 2 12:57:26.242806 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 2 12:57:26.242826 kernel: NET: Registered PF_INET protocol family Mar 2 12:57:26.242839 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 2 12:57:26.242852 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 2 12:57:26.242865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 2 12:57:26.242878 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 2 12:57:26.242892 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 2 12:57:26.242904 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 2 12:57:26.242918 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 12:57:26.242930 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 12:57:26.242947 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 2 12:57:26.242960 kernel: NET: Registered PF_XDP protocol family Mar 2 12:57:26.243201 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 2 12:57:26.243488 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 2 12:57:26.243672 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 2 12:57:26.243851 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 2 12:57:26.244026 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 2 12:57:26.244247 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff 
window] Mar 2 12:57:26.244631 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 2 12:57:26.244813 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 2 12:57:26.244831 kernel: PCI: CLS 0 bytes, default 64 Mar 2 12:57:26.244845 kernel: Initialise system trusted keyrings Mar 2 12:57:26.244857 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 2 12:57:26.244870 kernel: Key type asymmetric registered Mar 2 12:57:26.244882 kernel: Asymmetric key parser 'x509' registered Mar 2 12:57:26.244894 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 2 12:57:26.244906 kernel: io scheduler mq-deadline registered Mar 2 12:57:26.244924 kernel: io scheduler kyber registered Mar 2 12:57:26.244937 kernel: io scheduler bfq registered Mar 2 12:57:26.244949 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 2 12:57:26.244962 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 2 12:57:26.244974 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 2 12:57:26.244987 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 2 12:57:26.245000 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 12:57:26.245013 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 2 12:57:26.245025 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 2 12:57:26.245042 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 2 12:57:26.245054 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 2 12:57:26.245359 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 2 12:57:26.245432 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 2 12:57:26.245612 kernel: rtc_cmos 00:04: registered as rtc0 Mar 2 12:57:26.245790 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T12:57:25 UTC (1772456245) Mar 2 12:57:26.245964 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 2 
12:57:26.245982 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 2 12:57:26.246001 kernel: efifb: probing for efifb Mar 2 12:57:26.246013 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 2 12:57:26.246025 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 2 12:57:26.246037 kernel: efifb: scrolling: redraw Mar 2 12:57:26.246049 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 2 12:57:26.246060 kernel: Console: switching to colour frame buffer device 100x37 Mar 2 12:57:26.246071 kernel: fb0: EFI VGA frame buffer device Mar 2 12:57:26.246118 kernel: pstore: Using crash dump compression: deflate Mar 2 12:57:26.246129 kernel: pstore: Registered efi_pstore as persistent store backend Mar 2 12:57:26.246147 kernel: NET: Registered PF_INET6 protocol family Mar 2 12:57:26.246159 kernel: Segment Routing with IPv6 Mar 2 12:57:26.246171 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 12:57:26.246183 kernel: NET: Registered PF_PACKET protocol family Mar 2 12:57:26.246195 kernel: Key type dns_resolver registered Mar 2 12:57:26.246207 kernel: IPI shorthand broadcast: enabled Mar 2 12:57:26.246247 kernel: sched_clock: Marking stable (3823063672, 614548115)->(5000917952, -563306165) Mar 2 12:57:26.246264 kernel: registered taskstats version 1 Mar 2 12:57:26.246276 kernel: Loading compiled-in X.509 certificates Mar 2 12:57:26.246292 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45' Mar 2 12:57:26.246304 kernel: Key type .fscrypt registered Mar 2 12:57:26.246317 kernel: Key type fscrypt-provisioning registered Mar 2 12:57:26.246331 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 2 12:57:26.246343 kernel: ima: Allocated hash algorithm: sha1 Mar 2 12:57:26.246359 kernel: ima: No architecture policies found Mar 2 12:57:26.246492 kernel: clk: Disabling unused clocks Mar 2 12:57:26.246504 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 2 12:57:26.246514 kernel: Write protecting the kernel read-only data: 36864k Mar 2 12:57:26.246530 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 2 12:57:26.246542 kernel: Run /init as init process Mar 2 12:57:26.246552 kernel: with arguments: Mar 2 12:57:26.246562 kernel: /init Mar 2 12:57:26.246573 kernel: with environment: Mar 2 12:57:26.246584 kernel: HOME=/ Mar 2 12:57:26.246594 kernel: TERM=linux Mar 2 12:57:26.246607 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 12:57:26.246624 systemd[1]: Detected virtualization kvm. Mar 2 12:57:26.246635 systemd[1]: Detected architecture x86-64. Mar 2 12:57:26.246646 systemd[1]: Running in initrd. Mar 2 12:57:26.246657 systemd[1]: No hostname configured, using default hostname. Mar 2 12:57:26.246667 systemd[1]: Hostname set to . Mar 2 12:57:26.246678 systemd[1]: Initializing machine ID from VM UUID. Mar 2 12:57:26.246689 systemd[1]: Queued start job for default target initrd.target. Mar 2 12:57:26.246705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:57:26.246715 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:57:26.246727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 2 12:57:26.246740 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:57:26.246752 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 12:57:26.246771 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 12:57:26.246785 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 12:57:26.246796 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 12:57:26.246808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:57:26.246820 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:57:26.246831 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:57:26.246843 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:57:26.246858 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:57:26.246869 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:57:26.246880 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:57:26.246890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:57:26.246901 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 12:57:26.246914 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 12:57:26.246925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:57:26.246936 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:57:26.246947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:57:26.246961 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:57:26.246974 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 12:57:26.246985 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 12:57:26.246997 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 12:57:26.247010 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 12:57:26.247021 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 12:57:26.247033 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 12:57:26.247044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:57:26.247060 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 12:57:26.247072 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:57:26.247203 systemd-journald[195]: Collecting audit messages is disabled.
Mar 2 12:57:26.247230 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 12:57:26.247248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:57:26.247260 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:57:26.247271 systemd-journald[195]: Journal started
Mar 2 12:57:26.247298 systemd-journald[195]: Runtime Journal (/run/log/journal/d175e40a8b814778a3735d3291c3d6da) is 6.0M, max 48.3M, 42.2M free.
Mar 2 12:57:26.235689 systemd-modules-load[196]: Inserted module 'overlay'
Mar 2 12:57:26.265412 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:57:26.274611 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:57:26.276622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:57:26.285427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:57:26.297120 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 12:57:26.307912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 12:57:26.307952 kernel: Bridge firewalling registered
Mar 2 12:57:26.308022 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 2 12:57:26.312489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:57:26.325298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:57:26.330768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:57:26.339593 dracut-cmdline[218]: dracut-dracut-053
Mar 2 12:57:26.348232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:57:26.359995 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:57:26.362232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:57:26.384728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:57:26.397706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:57:26.413705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:57:26.452625 systemd-resolved[253]: Positive Trust Anchors:
Mar 2 12:57:26.452660 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 12:57:26.452687 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 12:57:26.456043 systemd-resolved[253]: Defaulting to hostname 'linux'.
Mar 2 12:57:26.458316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 12:57:26.463858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:57:26.509535 kernel: SCSI subsystem initialized
Mar 2 12:57:26.522607 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 12:57:26.538560 kernel: iscsi: registered transport (tcp)
Mar 2 12:57:26.588696 kernel: iscsi: registered transport (qla4xxx)
Mar 2 12:57:26.588856 kernel: QLogic iSCSI HBA Driver
Mar 2 12:57:26.664225 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 12:57:26.675860 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 12:57:26.713672 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 12:57:26.714251 kernel: device-mapper: uevent: version 1.0.3
Mar 2 12:57:26.720202 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 12:57:26.793504 kernel: raid6: avx2x4 gen() 32951 MB/s
Mar 2 12:57:26.811476 kernel: raid6: avx2x2 gen() 30007 MB/s
Mar 2 12:57:26.831537 kernel: raid6: avx2x1 gen() 23134 MB/s
Mar 2 12:57:26.831606 kernel: raid6: using algorithm avx2x4 gen() 32951 MB/s
Mar 2 12:57:26.875179 kernel: raid6: .... xor() 4870 MB/s, rmw enabled
Mar 2 12:57:26.875592 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 12:57:26.899593 kernel: xor: automatically using best checksumming function avx
Mar 2 12:57:27.125569 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 12:57:27.161453 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 12:57:27.186471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:57:27.210880 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 2 12:57:27.217217 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:57:27.247867 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 12:57:27.286135 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Mar 2 12:57:27.333233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 12:57:27.356630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 12:57:27.452635 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:57:27.484662 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 12:57:27.516253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 12:57:27.525627 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 12:57:27.541001 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 12:57:27.532253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:57:27.541199 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 12:57:27.573750 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 12:57:27.593456 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 12:57:27.593972 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 12:57:27.605356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 12:57:27.622700 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 12:57:27.622738 kernel: GPT:9289727 != 19775487
Mar 2 12:57:27.628546 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 12:57:27.628585 kernel: GPT:9289727 != 19775487
Mar 2 12:57:27.628611 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 12:57:27.628622 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:57:27.635727 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 12:57:27.648888 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 12:57:27.648911 kernel: libata version 3.00 loaded.
Mar 2 12:57:27.648922 kernel: AES CTR mode by8 optimization enabled
Mar 2 12:57:27.635796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:57:27.663508 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 12:57:27.664875 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 12:57:27.654194 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:57:27.687852 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (474)
Mar 2 12:57:27.687983 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 12:57:27.688532 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 12:57:27.688695 kernel: scsi host0: ahci
Mar 2 12:57:27.688866 kernel: scsi host1: ahci
Mar 2 12:57:27.659514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 12:57:27.701843 kernel: scsi host2: ahci
Mar 2 12:57:27.702444 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Mar 2 12:57:27.702471 kernel: scsi host3: ahci
Mar 2 12:57:27.659616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:57:27.730876 kernel: scsi host4: ahci
Mar 2 12:57:27.731456 kernel: scsi host5: ahci
Mar 2 12:57:27.731694 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 2 12:57:27.731714 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 2 12:57:27.731741 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 2 12:57:27.731757 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 2 12:57:27.731772 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 2 12:57:27.731788 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 2 12:57:27.681971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:57:27.738886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:57:27.779028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 12:57:27.791162 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:57:27.818068 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 12:57:27.827643 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 12:57:27.850955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 12:57:27.866300 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 12:57:27.890687 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 12:57:27.901684 disk-uuid[560]: Primary Header is updated.
Mar 2 12:57:27.901684 disk-uuid[560]: Secondary Entries is updated.
Mar 2 12:57:27.901684 disk-uuid[560]: Secondary Header is updated.
Mar 2 12:57:27.915160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:57:27.915223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:57:27.923363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:57:27.928550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:57:27.956278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:57:28.040576 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 12:57:28.040647 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 12:57:28.046561 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 12:57:28.046638 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 12:57:28.062574 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 12:57:28.074242 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 12:57:28.074674 kernel: ata3.00: applying bridge limits
Mar 2 12:57:28.074696 kernel: ata3.00: configured for UDMA/100
Mar 2 12:57:28.078429 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 12:57:28.086632 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 12:57:28.166311 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 12:57:28.167314 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 12:57:28.181949 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 12:57:28.929474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:57:28.930038 disk-uuid[561]: The operation has completed successfully.
Mar 2 12:57:28.989202 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 12:57:28.989414 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 12:57:29.018691 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 12:57:29.028176 sh[598]: Success
Mar 2 12:57:29.044437 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 12:57:29.126176 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 12:57:29.146013 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 12:57:29.171009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 12:57:29.188050 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 12:57:29.188153 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:57:29.188178 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 12:57:29.191163 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 12:57:29.195311 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 12:57:29.204887 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 12:57:29.205773 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 12:57:29.220694 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 12:57:29.226774 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 12:57:29.241983 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:57:29.242024 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:57:29.242042 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:57:29.252481 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:57:29.271176 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 12:57:29.277259 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:57:29.285272 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 12:57:29.298604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 12:57:29.372955 ignition[678]: Ignition 2.19.0
Mar 2 12:57:29.372988 ignition[678]: Stage: fetch-offline
Mar 2 12:57:29.373036 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:57:29.373054 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:57:29.373277 ignition[678]: parsed url from cmdline: ""
Mar 2 12:57:29.373284 ignition[678]: no config URL provided
Mar 2 12:57:29.373294 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 12:57:29.373309 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Mar 2 12:57:29.373362 ignition[678]: op(1): [started] loading QEMU firmware config module
Mar 2 12:57:29.373457 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 12:57:29.386291 ignition[678]: op(1): [finished] loading QEMU firmware config module
Mar 2 12:57:29.431036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 12:57:29.461053 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 12:57:29.499851 systemd-networkd[786]: lo: Link UP
Mar 2 12:57:29.499892 systemd-networkd[786]: lo: Gained carrier
Mar 2 12:57:29.502363 systemd-networkd[786]: Enumeration completed
Mar 2 12:57:29.502745 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 12:57:29.503832 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:57:29.503838 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 12:57:29.504875 systemd[1]: Reached target network.target - Network.
Mar 2 12:57:29.505793 systemd-networkd[786]: eth0: Link UP
Mar 2 12:57:29.505801 systemd-networkd[786]: eth0: Gained carrier
Mar 2 12:57:29.505810 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:57:29.552487 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 12:57:29.682813 ignition[678]: parsing config with SHA512: 5eddf3fd3b75501668037904eef92bf5ef3f6956391021078efe80d30f3bbe6975bc08849387780866f38974249c810b58eeceedc815bf7081a6380363ac7b06
Mar 2 12:57:29.692630 unknown[678]: fetched base config from "system"
Mar 2 12:57:29.692644 unknown[678]: fetched user config from "qemu"
Mar 2 12:57:29.693215 ignition[678]: fetch-offline: fetch-offline passed
Mar 2 12:57:29.696633 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 12:57:29.693310 ignition[678]: Ignition finished successfully
Mar 2 12:57:29.703354 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 12:57:29.713341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 12:57:29.940537 ignition[790]: Ignition 2.19.0
Mar 2 12:57:29.940568 ignition[790]: Stage: kargs
Mar 2 12:57:29.940740 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:57:29.946343 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 12:57:29.940752 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:57:29.941580 ignition[790]: kargs: kargs passed
Mar 2 12:57:29.941630 ignition[790]: Ignition finished successfully
Mar 2 12:57:29.985675 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 12:57:30.017881 ignition[798]: Ignition 2.19.0
Mar 2 12:57:30.017916 ignition[798]: Stage: disks
Mar 2 12:57:30.018124 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:57:30.021015 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 12:57:30.018138 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:57:30.026768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 12:57:30.018879 ignition[798]: disks: disks passed
Mar 2 12:57:30.033810 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 12:57:30.018931 ignition[798]: Ignition finished successfully
Mar 2 12:57:30.037897 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:57:30.041213 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:57:30.041346 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:57:30.110433 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 12:57:30.083701 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 12:57:30.113318 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 12:57:30.169124 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 12:57:30.326447 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 12:57:30.327355 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 12:57:30.333153 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:57:30.348679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:57:30.363023 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 12:57:30.379819 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Mar 2 12:57:30.379845 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:57:30.379856 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:57:30.379867 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:57:30.378119 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 12:57:30.392728 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:57:30.378165 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 12:57:30.378188 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:57:30.404700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:57:30.410317 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 12:57:30.431654 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 12:57:30.497145 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 12:57:30.505035 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Mar 2 12:57:30.516337 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 12:57:30.527135 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 12:57:30.682681 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 12:57:30.702546 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 12:57:30.710581 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 12:57:30.717848 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 12:57:30.724459 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:57:30.761354 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 12:57:30.799989 ignition[930]: INFO : Ignition 2.19.0
Mar 2 12:57:30.799989 ignition[930]: INFO : Stage: mount
Mar 2 12:57:30.804579 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:57:30.804579 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:57:30.811200 ignition[930]: INFO : mount: mount passed
Mar 2 12:57:30.813421 ignition[930]: INFO : Ignition finished successfully
Mar 2 12:57:30.815954 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 12:57:30.832504 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 12:57:30.840905 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:57:30.858441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Mar 2 12:57:30.858471 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:57:30.864110 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:57:30.864138 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:57:30.873743 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:57:30.876951 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:57:30.905681 ignition[960]: INFO : Ignition 2.19.0 Mar 2 12:57:30.905681 ignition[960]: INFO : Stage: files Mar 2 12:57:30.905681 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:57:30.905681 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:57:30.921729 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Mar 2 12:57:30.921729 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 12:57:30.921729 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 12:57:30.921729 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 12:57:30.921729 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 12:57:30.921729 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 12:57:30.921729 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:57:30.921729 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 12:57:30.913120 unknown[960]: wrote ssh authorized keys file for user: core Mar 2 12:57:31.002560 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 12:57:31.181267 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:57:31.181267 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 12:57:31.194280 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 12:57:31.194280 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 2 12:57:31.510861 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 12:57:31.516691 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 2 12:57:33.440709 kernel: hrtimer: interrupt took 2980947 ns
Mar 2 12:57:33.772520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 12:57:33.772520 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 2 12:57:33.782614 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 12:57:33.845853 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 12:57:33.853035 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 12:57:33.869797 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 12:57:33.869797 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 12:57:33.869797 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 12:57:33.869797 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 12:57:33.869797 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 12:57:33.869797 ignition[960]: INFO : files: files passed
Mar 2 12:57:33.869797 ignition[960]: INFO : Ignition finished successfully
Mar 2 12:57:33.903044 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 12:57:33.924633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 12:57:33.930835 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 12:57:33.931323 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 12:57:33.931505 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 12:57:33.979011 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 12:57:33.986909 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:57:33.986909 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:57:34.000487 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:57:33.994337 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 12:57:34.006208 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
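The op(5)–op(12) sequence above is Ignition replaying a user-supplied config into /sysroot: write files, create the sysext link, install two units, disable one preset and enable the other. A minimal Butane sketch that would produce similar operations — the paths and unit names come from the log; the `variant`/`version` header and all file contents are assumptions, not recovered from the log:

```yaml
# Hypothetical Butane config reconstructing the logged Ignition ops.
# Only paths/unit names are from the log; inline contents are placeholders.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /home/core/nginx.yaml
      contents:
        inline: |
          # pod manifest (placeholder)
    - path: /etc/flatcar/update.conf
      contents:
        inline: |
          # update settings (placeholder)
  links:
    - path: /etc/extensions/kubernetes.raw      # op(9) in the log
      target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service                # preset enabled, op(11)
      enabled: true
    - name: coreos-metadata.service             # preset disabled, op(f)
      enabled: false
```

Such a config would be transpiled to Ignition JSON with `butane` and handed to the VM; the op(a) GET against extensions.flatcar.org would correspond to a remote `contents.source` for the sysext image rather than inline contents.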
Mar 2 12:57:34.032619 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 12:57:34.070873 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 12:57:34.071073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 12:57:34.082057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 12:57:34.088573 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 12:57:34.094773 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 12:57:34.119650 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 12:57:34.136223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 12:57:34.150701 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 12:57:34.164005 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:57:34.171160 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:57:34.178506 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 12:57:34.184460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 12:57:34.187482 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 12:57:34.194892 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 12:57:34.201142 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 12:57:34.206779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 12:57:34.213315 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:57:34.220699 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 12:57:34.227595 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 12:57:34.233791 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 12:57:34.241320 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 12:57:34.247882 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 12:57:34.253983 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 12:57:34.259006 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 12:57:34.261886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 12:57:34.268690 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:57:34.276724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:57:34.283974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 12:57:34.286840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:57:34.294672 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 12:57:34.297585 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 12:57:34.304270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 12:57:34.307470 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 12:57:34.314489 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 12:57:34.319866 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 12:57:34.323170 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:57:34.331470 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 12:57:34.337525 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 12:57:34.343329 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 12:57:34.345922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:57:34.352075 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 12:57:34.354762 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:57:34.361184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 12:57:34.364678 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 12:57:34.372299 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 12:57:34.375215 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 12:57:34.389685 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 12:57:34.399271 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 12:57:34.407316 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 12:57:34.412150 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:57:34.417817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 12:57:34.417952 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 12:57:34.436713 ignition[1014]: INFO : Ignition 2.19.0
Mar 2 12:57:34.436713 ignition[1014]: INFO : Stage: umount
Mar 2 12:57:34.441203 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:57:34.441203 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:57:34.441203 ignition[1014]: INFO : umount: umount passed
Mar 2 12:57:34.441203 ignition[1014]: INFO : Ignition finished successfully
Mar 2 12:57:34.454272 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 12:57:34.457813 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 12:57:34.460792 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 12:57:34.467003 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 12:57:34.469799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 12:57:34.477490 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 12:57:34.480260 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 12:57:34.489789 systemd[1]: Stopped target network.target - Network.
Mar 2 12:57:34.495240 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 12:57:34.495339 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 12:57:34.503820 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 12:57:34.503903 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 12:57:34.512429 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 12:57:34.512502 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 12:57:34.520769 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 12:57:34.520841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 12:57:34.529721 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 12:57:34.529793 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 12:57:34.538869 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 12:57:34.545549 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 12:57:34.552503 systemd-networkd[786]: eth0: DHCPv6 lease lost
Mar 2 12:57:34.556249 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 12:57:34.556507 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 12:57:34.567322 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 12:57:34.567580 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 12:57:34.576607 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 12:57:34.576678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:57:34.593685 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 12:57:34.596640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 12:57:34.596709 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 12:57:34.604015 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 12:57:34.604131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:57:34.607669 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 12:57:34.607727 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:57:34.614458 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 12:57:34.614523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:57:34.622149 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:57:34.642995 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 12:57:34.643299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:57:34.649854 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 12:57:34.649925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:57:34.656627 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 12:57:34.656676 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:57:34.660179 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 12:57:34.660240 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 12:57:34.881570 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 12:57:34.881652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 12:57:34.891060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 12:57:34.891190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:57:34.914681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 12:57:34.923061 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 12:57:34.923195 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:57:34.931791 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 12:57:34.931931 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:57:34.938274 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 12:57:34.938335 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:57:34.942127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 12:57:34.942185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:57:34.950489 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 12:57:34.950742 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 12:57:34.955798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 12:57:34.955932 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 12:57:34.966474 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 12:57:34.993584 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 12:57:35.048592 systemd[1]: Switching root.
Mar 2 12:57:35.163989 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 2 12:57:35.164056 systemd-journald[195]: Journal stopped
Mar 2 12:57:36.788332 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 12:57:36.788451 kernel: SELinux: policy capability open_perms=1
Mar 2 12:57:36.788472 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 12:57:36.788485 kernel: SELinux: policy capability always_check_network=0
Mar 2 12:57:36.788495 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 12:57:36.788509 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 12:57:36.788520 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 12:57:36.788530 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 12:57:36.788540 kernel: audit: type=1403 audit(1772456255.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 12:57:36.788552 systemd[1]: Successfully loaded SELinux policy in 65.350ms.
Mar 2 12:57:36.788574 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.440ms.
Mar 2 12:57:36.788585 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 12:57:36.788600 systemd[1]: Detected virtualization kvm.
Mar 2 12:57:36.788610 systemd[1]: Detected architecture x86-64.
Mar 2 12:57:36.788621 systemd[1]: Detected first boot.
Mar 2 12:57:36.788632 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 12:57:36.788642 zram_generator::config[1057]: No configuration found.
Mar 2 12:57:36.788654 systemd[1]: Populated /etc with preset unit settings.
Mar 2 12:57:36.788669 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 12:57:36.788679 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 12:57:36.788693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 12:57:36.788709 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 12:57:36.788719 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 12:57:36.788731 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 12:57:36.788745 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 12:57:36.788756 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 12:57:36.788767 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 12:57:36.788778 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 12:57:36.788791 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 12:57:36.788801 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:57:36.788813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:57:36.788823 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 12:57:36.788834 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 12:57:36.788846 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 12:57:36.788856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:57:36.788867 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 12:57:36.788878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:57:36.788891 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 12:57:36.788901 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 12:57:36.788912 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:57:36.788923 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 12:57:36.788934 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:57:36.788944 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 12:57:36.788955 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:57:36.788966 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:57:36.788980 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 12:57:36.788991 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 12:57:36.789001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:57:36.789012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:57:36.789022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:57:36.789033 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 12:57:36.789044 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 12:57:36.789055 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 12:57:36.789066 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 12:57:36.789079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:57:36.789090 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 12:57:36.789139 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 12:57:36.789150 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 12:57:36.789161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 12:57:36.789171 systemd[1]: Reached target machines.target - Containers.
Mar 2 12:57:36.789182 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 12:57:36.789193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:57:36.789207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 12:57:36.789218 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 12:57:36.789229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:57:36.789241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 12:57:36.789252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:57:36.789262 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 12:57:36.789273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:57:36.789284 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 12:57:36.789294 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 12:57:36.789307 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 12:57:36.789318 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 12:57:36.789329 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 12:57:36.789340 kernel: fuse: init (API version 7.39)
Mar 2 12:57:36.789350 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 12:57:36.789360 kernel: ACPI: bus type drm_connector registered
Mar 2 12:57:36.789418 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 12:57:36.789431 kernel: loop: module loaded
Mar 2 12:57:36.789442 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 12:57:36.789480 systemd-journald[1141]: Collecting audit messages is disabled.
Mar 2 12:57:36.789503 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 12:57:36.789514 systemd-journald[1141]: Journal started
Mar 2 12:57:36.789532 systemd-journald[1141]: Runtime Journal (/run/log/journal/d175e40a8b814778a3735d3291c3d6da) is 6.0M, max 48.3M, 42.2M free.
Mar 2 12:57:36.093714 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 12:57:36.120959 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 12:57:36.121755 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 12:57:36.122466 systemd[1]: systemd-journald.service: Consumed 1.775s CPU time.
Mar 2 12:57:36.810330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 12:57:36.818007 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 12:57:36.818331 systemd[1]: Stopped verity-setup.service.
Mar 2 12:57:36.833620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:57:36.846872 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:57:36.858735 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 12:57:36.865281 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 12:57:36.871529 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 12:57:36.880932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 12:57:36.892692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 12:57:36.900810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 12:57:36.907656 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 12:57:36.917649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:57:36.925998 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 12:57:36.926474 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 12:57:36.934259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:57:36.934717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:57:36.942313 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 12:57:36.942713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 12:57:36.960492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:57:36.960814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:57:36.967849 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 12:57:36.968211 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 12:57:36.974958 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:57:36.975334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:57:36.981881 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:57:36.994355 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 12:57:37.005753 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 12:57:37.036853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:57:37.041948 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 12:57:37.070513 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 12:57:37.076872 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 12:57:37.081260 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 12:57:37.081300 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:57:37.086749 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 12:57:37.093287 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 12:57:37.099941 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 12:57:37.104339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:57:37.106755 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 12:57:37.113245 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 12:57:37.117631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 12:57:37.119154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 12:57:37.124602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 12:57:37.128905 systemd-journald[1141]: Time spent on flushing to /var/log/journal/d175e40a8b814778a3735d3291c3d6da is 268.463ms for 983 entries.
Mar 2 12:57:37.128905 systemd-journald[1141]: System Journal (/var/log/journal/d175e40a8b814778a3735d3291c3d6da) is 8.0M, max 195.6M, 187.6M free.
Mar 2 12:57:37.467262 systemd-journald[1141]: Received client request to flush runtime journal.
Mar 2 12:57:37.467423 kernel: loop0: detected capacity change from 0 to 228704
Mar 2 12:57:37.127031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:57:37.140546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 12:57:37.169019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:57:37.178661 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 12:57:37.188613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 12:57:37.317510 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 12:57:37.339849 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 12:57:37.372221 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 12:57:37.389827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 12:57:37.414619 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 12:57:37.433133 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:57:37.445361 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 2 12:57:37.472072 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 12:57:37.499422 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 12:57:37.495921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 12:57:37.496780 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 12:57:37.510830 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Mar 2 12:57:37.510880 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Mar 2 12:57:37.524152 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:57:37.567333 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 12:57:37.579466 kernel: loop1: detected capacity change from 0 to 140768
Mar 2 12:57:37.610305 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 12:57:37.621706 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:57:37.660703 kernel: loop2: detected capacity change from 0 to 142488
Mar 2 12:57:37.686496 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 2 12:57:37.686546 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 2 12:57:37.697672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:57:37.923726 kernel: loop3: detected capacity change from 0 to 228704
Mar 2 12:57:37.971476 kernel: loop4: detected capacity change from 0 to 140768
Mar 2 12:57:37.996524 kernel: loop5: detected capacity change from 0 to 142488
Mar 2 12:57:38.017889 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 12:57:38.018655 (sd-merge)[1198]: Merged extensions into '/usr'.
Mar 2 12:57:38.397658 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 12:57:38.397702 systemd[1]: Reloading...
Mar 2 12:57:38.515163 zram_generator::config[1223]: No configuration found.
Mar 2 12:57:38.817847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 12:57:38.835666 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 12:57:38.878815 systemd[1]: Reloading finished in 480 ms.
Mar 2 12:57:38.913767 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 12:57:38.918358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 12:57:38.935698 systemd[1]: Starting ensure-sysext.service...
Mar 2 12:57:39.001646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:57:39.039551 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Mar 2 12:57:39.039609 systemd[1]: Reloading...
Mar 2 12:57:39.201093 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 12:57:39.215209 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 12:57:39.243572 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 12:57:39.244069 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 2 12:57:39.249333 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 2 12:57:39.335451 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:57:39.335469 systemd-tmpfiles[1262]: Skipping /boot
Mar 2 12:57:39.393458 zram_generator::config[1292]: No configuration found.
Mar 2 12:57:39.399343 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:57:39.399870 systemd-tmpfiles[1262]: Skipping /boot
Mar 2 12:57:39.517932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 12:57:39.569439 systemd[1]: Reloading finished in 528 ms.
Mar 2 12:57:39.589502 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
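Both reload passes log the same docker.socket warning: line 6 of the shipped unit uses a path below the legacy /var/run/ directory, which systemd rewrites to /run/docker.sock at load time. The fix it asks for is a one-line edit in the unit's [Socket] section — a sketch only, since the rest of the shipped unit is not shown in the log:

```ini
# docker.socket — correcting the stanza flagged at
# /usr/lib/systemd/system/docker.socket:6 (other directives omitted)
[Socket]
# was: ListenStream=/var/run/docker.sock   (legacy /var/run/ prefix)
ListenStream=/run/docker.sock
```

On Flatcar the unit lives in the read-only /usr tree, so in practice the change would go in a drop-in or an overriding copy under /etc/systemd/system/ rather than an edit to /usr/lib/systemd/system/docker.socket itself.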
Mar 2 12:57:39.605903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:57:39.619635 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:57:39.624580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 12:57:39.629306 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 12:57:39.638617 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 12:57:39.646729 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:57:39.657729 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 12:57:39.677829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:57:39.678004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:57:39.679556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:57:39.686200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:57:39.691783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:57:39.695305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:57:39.699845 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 12:57:39.703092 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:57:39.704918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:57:39.705202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 2 12:57:39.717342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:57:39.717598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:57:39.724781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:57:39.728019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:57:39.728579 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:57:39.729995 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 2 12:57:39.735201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:57:39.735459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:57:39.739448 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Mar 2 12:57:39.740196 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:57:39.740531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:57:39.747545 augenrules[1354]: No rules Mar 2 12:57:39.750531 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 12:57:39.764997 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 12:57:39.769756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:57:39.769964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:57:39.842148 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 12:57:39.881712 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 2 12:57:39.889978 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:57:39.890261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:57:39.898785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:57:39.964594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:57:40.027271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:57:40.113247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:57:40.130176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:57:40.193507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 12:57:40.210227 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 12:57:40.214245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:57:40.219010 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 12:57:40.224189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:57:40.224611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 12:57:40.230208 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 12:57:40.230834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 12:57:40.235074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:57:40.235502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:57:40.241239 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 2 12:57:40.241743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:57:40.276877 systemd[1]: Finished ensure-sysext.service. Mar 2 12:57:40.281046 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 12:57:40.301447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1371) Mar 2 12:57:40.306827 systemd-resolved[1331]: Positive Trust Anchors: Mar 2 12:57:40.307532 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 12:57:40.307592 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 12:57:40.314065 systemd-resolved[1331]: Defaulting to hostname 'linux'. Mar 2 12:57:40.318529 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 12:57:40.325095 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:57:40.328944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 12:57:40.329066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 12:57:40.342427 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 2 12:57:40.349999 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 2 12:57:40.391095 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 2 12:57:40.431790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 12:57:40.439478 systemd-networkd[1387]: lo: Link UP Mar 2 12:57:40.439487 systemd-networkd[1387]: lo: Gained carrier Mar 2 12:57:40.441216 systemd-networkd[1387]: Enumeration completed Mar 2 12:57:40.447838 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 12:57:40.457677 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 12:57:40.476020 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:57:40.476029 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 12:57:40.478306 systemd-networkd[1387]: eth0: Link UP Mar 2 12:57:40.479285 systemd-networkd[1387]: eth0: Gained carrier Mar 2 12:57:40.479489 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:57:40.479602 systemd[1]: Reached target network.target - Network. Mar 2 12:57:40.494588 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 2 12:57:40.496505 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 12:57:40.499005 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Mar 2 12:57:40.499481 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
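`eth0` above is configured by the catch-all `/usr/lib/systemd/network/zz-default.network` and then acquires `10.0.0.34/16` over DHCPv4. The exact shipped file is not reproduced in this log; a minimal approximation of such a fallback unit looks like:

```ini
# Sketch of a lowest-priority catch-all .network unit (the "zz-" prefix
# sorts it after any more specific units). Contents are an assumption,
# not the literal Flatcar file.
[Match]
Name=*

[Network]
DHCP=yes
```

Because it matches on the interface name glob rather than a stable attribute like a MAC address, networkd flags the match as "based on potentially unpredictable interface name", as seen above.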
Mar 2 12:57:40.513325 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 12:57:41.193487 systemd-resolved[1331]: Clock change detected. Flushing caches. Mar 2 12:57:41.193636 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 12:57:41.193771 systemd-timesyncd[1408]: Initial clock synchronization to Mon 2026-03-02 12:57:41.193183 UTC. Mar 2 12:57:41.207814 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 2 12:57:41.217470 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 12:57:41.225095 kernel: ACPI: button: Power Button [PWRF] Mar 2 12:57:41.277054 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 12:57:41.285136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:57:41.290074 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 2 12:57:41.294052 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 12:57:41.300919 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 2 12:57:41.301212 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 12:57:41.571244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:57:41.571709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:57:41.592160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
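systemd-timesyncd reaches `10.0.0.1:123` above because the NTP server was handed out via DHCP along with the address. To pin that server statically instead, a drop-in like the following would work (file name hypothetical):

```ini
# /etc/systemd/timesyncd.conf.d/ntp.conf (hypothetical)
# Pin the NTP server that was discovered via DHCP in the log above.
[Time]
NTP=10.0.0.1
```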
Mar 2 12:57:41.607047 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 12:57:41.702057 kernel: kvm_amd: TSC scaling supported Mar 2 12:57:41.702132 kernel: kvm_amd: Nested Virtualization enabled Mar 2 12:57:41.702147 kernel: kvm_amd: Nested Paging enabled Mar 2 12:57:41.704075 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 12:57:41.709140 kernel: kvm_amd: PMU virtualization is disabled Mar 2 12:57:41.785279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:57:41.976111 kernel: EDAC MC: Ver: 3.0.0 Mar 2 12:57:42.007643 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 12:57:42.025325 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 12:57:42.049091 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 12:57:42.092797 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 2 12:57:42.097843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:57:42.101180 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 12:57:42.104473 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 12:57:42.108728 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 12:57:42.117280 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 12:57:42.120984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 12:57:42.124692 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 12:57:42.128374 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 2 12:57:42.128436 systemd[1]: Reached target paths.target - Path Units. Mar 2 12:57:42.131118 systemd[1]: Reached target timers.target - Timer Units. Mar 2 12:57:42.135786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 12:57:42.141979 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 12:57:42.158923 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 12:57:42.163899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 2 12:57:42.167693 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 12:57:42.170897 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 12:57:42.173672 systemd[1]: Reached target basic.target - Basic System. Mar 2 12:57:42.173820 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 2 12:57:42.173845 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 12:57:42.175145 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 12:57:42.181341 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 12:57:42.188130 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 12:57:42.188092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 12:57:42.196120 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 12:57:42.199216 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 12:57:42.204335 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 12:57:42.233114 jq[1438]: false Mar 2 12:57:42.238137 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 2 12:57:42.248443 extend-filesystems[1439]: Found loop3 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found loop4 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found loop5 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found sr0 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda1 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda2 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda3 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found usr Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda4 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda6 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda7 Mar 2 12:57:42.248443 extend-filesystems[1439]: Found vda9 Mar 2 12:57:42.248443 extend-filesystems[1439]: Checking size of /dev/vda9 Mar 2 12:57:42.393642 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 12:57:42.393684 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1389) Mar 2 12:57:42.394509 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 12:57:42.247641 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 12:57:42.271318 dbus-daemon[1437]: [system] SELinux support is enabled Mar 2 12:57:42.395572 extend-filesystems[1439]: Resized partition /dev/vda9 Mar 2 12:57:42.262145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 2 12:57:42.400576 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Mar 2 12:57:42.400576 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 12:57:42.400576 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 12:57:42.400576 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 12:57:42.288510 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 2 12:57:42.404987 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Mar 2 12:57:42.298955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 12:57:42.405311 update_engine[1460]: I20260302 12:57:42.358603 1460 main.cc:92] Flatcar Update Engine starting Mar 2 12:57:42.405311 update_engine[1460]: I20260302 12:57:42.361588 1460 update_check_scheduler.cc:74] Next update check in 9m50s Mar 2 12:57:42.300441 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 12:57:42.320855 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 12:57:42.406094 jq[1461]: true Mar 2 12:57:42.360194 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 12:57:42.365671 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 12:57:42.371251 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 2 12:57:42.396121 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 12:57:42.396343 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 12:57:42.396833 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 12:57:42.397078 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 12:57:42.420386 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 12:57:42.421818 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 12:57:42.422108 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 12:57:42.431860 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 12:57:42.432394 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 2 12:57:42.453683 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Mar 2 12:57:42.453763 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 12:57:42.455434 systemd-logind[1456]: New seat seat0. Mar 2 12:57:42.461223 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 12:57:42.463120 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 12:57:42.466838 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 12:57:42.470271 jq[1470]: true Mar 2 12:57:42.484434 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 2 12:57:42.491651 tar[1468]: linux-amd64/LICENSE Mar 2 12:57:42.491651 tar[1468]: linux-amd64/helm Mar 2 12:57:42.496565 systemd[1]: Started update-engine.service - Update Engine. Mar 2 12:57:42.541215 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 12:57:42.545496 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 12:57:42.546304 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 12:57:42.551424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 2 12:57:42.551614 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 12:57:42.560826 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Mar 2 12:57:42.571454 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 2 12:57:42.578558 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 12:57:42.588218 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 12:57:42.600847 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 12:57:42.601328 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 12:57:42.893938 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 12:57:42.900551 systemd-networkd[1387]: eth0: Gained IPv6LL Mar 2 12:57:42.927114 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 12:57:42.937636 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 12:57:43.125305 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 12:57:43.143854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:57:43.154122 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 12:57:43.171825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 12:57:43.195264 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 12:57:43.206910 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 12:57:43.207334 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 12:57:43.229586 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 12:57:43.235791 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 12:57:43.241655 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 12:57:43.242084 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 12:57:43.355893 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 2 12:57:44.061179 containerd[1473]: time="2026-03-02T12:57:44.060628724Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 12:57:44.127172 containerd[1473]: time="2026-03-02T12:57:44.127098667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:57:44.132611 containerd[1473]: time="2026-03-02T12:57:44.132448908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:57:44.132611 containerd[1473]: time="2026-03-02T12:57:44.132482942Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 12:57:44.132611 containerd[1473]: time="2026-03-02T12:57:44.132498070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 12:57:44.133205 containerd[1473]: time="2026-03-02T12:57:44.132904819Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 2 12:57:44.133205 containerd[1473]: time="2026-03-02T12:57:44.132944875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133205 containerd[1473]: time="2026-03-02T12:57:44.133074316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133205 containerd[1473]: time="2026-03-02T12:57:44.133088833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133870 containerd[1473]: time="2026-03-02T12:57:44.133326206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133870 containerd[1473]: time="2026-03-02T12:57:44.133371191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133870 containerd[1473]: time="2026-03-02T12:57:44.133385166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133870 containerd[1473]: time="2026-03-02T12:57:44.133393913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 2 12:57:44.133870 containerd[1473]: time="2026-03-02T12:57:44.133556987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:57:44.134069 containerd[1473]: time="2026-03-02T12:57:44.133962655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 12:57:44.134205 containerd[1473]: time="2026-03-02T12:57:44.134147841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 12:57:44.134205 containerd[1473]: time="2026-03-02T12:57:44.134186723Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 2 12:57:44.134485 containerd[1473]: time="2026-03-02T12:57:44.134367480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 12:57:44.134523 containerd[1473]: time="2026-03-02T12:57:44.134496000Z" level=info msg="metadata content store policy set" policy=shared Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.142590355Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.142688398Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.142706381Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.142721139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.142772235Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.142933115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143345475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143566007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143582297Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143593698Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143605260Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143617493Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143629415Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148066 containerd[1473]: time="2026-03-02T12:57:44.143641027Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143653480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143668438Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143699727Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143711658Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143792500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143806425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143818257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143846941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143858803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143870175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143881005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143893418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143904458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148390 containerd[1473]: time="2026-03-02T12:57:44.143917362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.143940746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.143951647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.143962437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.143976623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144050902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144065419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144075157Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144187346Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144222302Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144233092Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144244624Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144253069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144280992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 12:57:44.148623 containerd[1473]: time="2026-03-02T12:57:44.144306900Z" level=info msg="NRI interface is disabled by configuration." Mar 2 12:57:44.359068 containerd[1473]: time="2026-03-02T12:57:44.144316708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.144893425Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.144944841Z" level=info msg="Connect containerd service" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.145410010Z" level=info msg="using legacy CRI server" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.145422443Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.145835835Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148312961Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148539634Z" level=info msg="Start subscribing containerd event" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148698671Z" level=info msg="Start recovering state" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148842500Z" level=info msg="Start event monitor" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148880951Z" level=info msg="Start snapshots syncer" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148922549Z" level=info msg="Start cni network conf syncer for default" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.148930403Z" level=info msg="Start streaming server" Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.150987004Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 12:57:44.359112 containerd[1473]: time="2026-03-02T12:57:44.151174975Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 12:57:44.371879 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 12:57:44.375922 containerd[1473]: time="2026-03-02T12:57:44.372120913Z" level=info msg="containerd successfully booted in 0.313641s" Mar 2 12:57:44.597883 tar[1468]: linux-amd64/README.md Mar 2 12:57:44.701394 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 12:57:46.613908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:57:46.619348 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 12:57:46.623905 systemd[1]: Startup finished in 4.009s (kernel) + 9.539s (initrd) + 10.648s (userspace) = 24.197s. 
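The `failed to load cni during init` error in the containerd records above is benign at this point in boot: containerd probes /etc/cni/net.d for a network config, and nothing exists there until a cluster network add-on is installed. For illustration only (the network name, subnet, and plugin choice below are assumptions, not recovered from this log), a minimal conflist that would satisfy that probe looks like:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    }
  ]
}
```

In practice a network add-on (flannel, Calico, and the like) drops a file of this shape into /etc/cni/net.d after the node joins a cluster, and the CNI error clears on its own.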
Mar 2 12:57:46.633158 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:57:47.577105 kubelet[1550]: E0302 12:57:47.576735 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:57:47.582977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:57:47.583267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:57:47.583737 systemd[1]: kubelet.service: Consumed 4.016s CPU time. Mar 2 12:57:51.211932 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 12:57:51.225203 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:39056.service - OpenSSH per-connection server daemon (10.0.0.1:39056). Mar 2 12:57:51.276505 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 39056 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:51.278735 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:51.292124 systemd-logind[1456]: New session 1 of user core. Mar 2 12:57:51.293513 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 12:57:51.305498 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 12:57:51.333692 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 12:57:51.397966 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 12:57:51.407910 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 12:57:51.663210 systemd[1568]: Queued start job for default target default.target. 
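The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`, exit status 1) is the expected state of a node that has not yet run `kubeadm init` or `kubeadm join`: kubeadm writes that config file during bootstrap, and until then every start attempt exits the same way. As a sketch only (field values here are illustrative assumptions, not recovered from this log), the file kubeadm generates is a KubeletConfiguration along these lines:

```yaml
# Illustrative sketch of /var/lib/kubelet/config.yaml; the real file is
# generated by kubeadm during init/join, with cluster-specific values.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # consistent with SystemdCgroup:true in the containerd runc options above
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
```

Once that file exists, the scheduled kubelet restarts visible later in this log ("restart counter is at 1", "restart counter is at 2") would succeed instead of exiting with status 1.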
Mar 2 12:57:51.673628 systemd[1568]: Created slice app.slice - User Application Slice. Mar 2 12:57:51.673687 systemd[1568]: Reached target paths.target - Paths. Mar 2 12:57:51.673700 systemd[1568]: Reached target timers.target - Timers. Mar 2 12:57:51.675674 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 12:57:51.695538 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 12:57:51.695719 systemd[1568]: Reached target sockets.target - Sockets. Mar 2 12:57:51.695796 systemd[1568]: Reached target basic.target - Basic System. Mar 2 12:57:51.695837 systemd[1568]: Reached target default.target - Main User Target. Mar 2 12:57:51.695874 systemd[1568]: Startup finished in 272ms. Mar 2 12:57:51.696192 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 12:57:51.698473 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 12:57:51.849943 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:39062.service - OpenSSH per-connection server daemon (10.0.0.1:39062). Mar 2 12:57:51.899337 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 39062 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:51.901580 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:51.908431 systemd-logind[1456]: New session 2 of user core. Mar 2 12:57:51.933263 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 12:57:52.002486 sshd[1579]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:52.011333 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:39062.service: Deactivated successfully. Mar 2 12:57:52.013847 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 12:57:52.015935 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Mar 2 12:57:52.027347 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:39078.service - OpenSSH per-connection server daemon (10.0.0.1:39078). 
Mar 2 12:57:52.028325 systemd-logind[1456]: Removed session 2. Mar 2 12:57:52.078068 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 39078 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:52.081667 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:52.088862 systemd-logind[1456]: New session 3 of user core. Mar 2 12:57:52.105219 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 12:57:52.161159 sshd[1586]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:52.175714 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:39078.service: Deactivated successfully. Mar 2 12:57:52.178158 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 12:57:52.180091 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Mar 2 12:57:52.192512 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:38202.service - OpenSSH per-connection server daemon (10.0.0.1:38202). Mar 2 12:57:52.193806 systemd-logind[1456]: Removed session 3. Mar 2 12:57:52.223421 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 38202 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:52.226435 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:52.231944 systemd-logind[1456]: New session 4 of user core. Mar 2 12:57:52.246947 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 12:57:52.316909 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:52.335679 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:38202.service: Deactivated successfully. Mar 2 12:57:52.338192 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 12:57:52.340110 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Mar 2 12:57:52.356348 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:38216.service - OpenSSH per-connection server daemon (10.0.0.1:38216). 
Mar 2 12:57:52.357352 systemd-logind[1456]: Removed session 4. Mar 2 12:57:52.399494 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 38216 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:52.401441 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:52.412509 systemd-logind[1456]: New session 5 of user core. Mar 2 12:57:52.422187 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 12:57:52.489899 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 12:57:52.490576 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:57:52.507112 sudo[1603]: pam_unix(sudo:session): session closed for user root Mar 2 12:57:52.510198 sshd[1600]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:52.521707 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:38216.service: Deactivated successfully. Mar 2 12:57:52.524145 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 12:57:52.525985 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Mar 2 12:57:52.540352 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:38230.service - OpenSSH per-connection server daemon (10.0.0.1:38230). Mar 2 12:57:52.541523 systemd-logind[1456]: Removed session 5. Mar 2 12:57:52.575066 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 38230 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:52.577073 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:52.582964 systemd-logind[1456]: New session 6 of user core. Mar 2 12:57:52.595431 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 2 12:57:52.664113 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 12:57:52.664656 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:57:52.670374 sudo[1612]: pam_unix(sudo:session): session closed for user root Mar 2 12:57:52.677510 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 12:57:52.677904 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:57:52.706425 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 12:57:52.710187 auditctl[1615]: No rules Mar 2 12:57:52.710605 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 12:57:52.710881 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 12:57:52.714182 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:57:52.756094 augenrules[1633]: No rules Mar 2 12:57:52.757704 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 12:57:52.759102 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 2 12:57:52.761407 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:52.771528 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:38230.service: Deactivated successfully. Mar 2 12:57:52.773318 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 12:57:52.774845 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Mar 2 12:57:52.776197 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:38234.service - OpenSSH per-connection server daemon (10.0.0.1:38234). Mar 2 12:57:52.777258 systemd-logind[1456]: Removed session 6. 
Mar 2 12:57:52.825845 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 38234 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:57:52.827297 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:52.833323 systemd-logind[1456]: New session 7 of user core. Mar 2 12:57:52.856548 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 12:57:52.915970 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 12:57:52.916418 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:57:54.137653 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 12:57:54.139205 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 12:57:55.167907 dockerd[1662]: time="2026-03-02T12:57:55.124980265Z" level=info msg="Starting up" Mar 2 12:57:55.481135 dockerd[1662]: time="2026-03-02T12:57:55.480807673Z" level=info msg="Loading containers: start." Mar 2 12:57:55.805093 kernel: Initializing XFRM netlink socket Mar 2 12:57:55.928321 systemd-networkd[1387]: docker0: Link UP Mar 2 12:57:55.952809 dockerd[1662]: time="2026-03-02T12:57:55.952514038Z" level=info msg="Loading containers: done." Mar 2 12:57:55.983322 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3838261691-merged.mount: Deactivated successfully. 
Mar 2 12:57:55.987439 dockerd[1662]: time="2026-03-02T12:57:55.987289463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 12:57:55.987677 dockerd[1662]: time="2026-03-02T12:57:55.987595444Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 12:57:55.987975 dockerd[1662]: time="2026-03-02T12:57:55.987904291Z" level=info msg="Daemon has completed initialization" Mar 2 12:57:56.056070 dockerd[1662]: time="2026-03-02T12:57:56.055883911Z" level=info msg="API listen on /run/docker.sock" Mar 2 12:57:56.056270 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 12:57:57.128478 containerd[1473]: time="2026-03-02T12:57:57.128149181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 2 12:57:57.834665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 12:57:58.010824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:57:58.087956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930656692.mount: Deactivated successfully. Mar 2 12:57:58.624271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:57:58.647602 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:57:58.766920 kubelet[1854]: E0302 12:57:58.766832 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:57:58.773155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:57:58.773390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:58:00.229447 containerd[1473]: time="2026-03-02T12:58:00.228937713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:00.230631 containerd[1473]: time="2026-03-02T12:58:00.229061755Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 2 12:58:00.232600 containerd[1473]: time="2026-03-02T12:58:00.232296356Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:00.242624 containerd[1473]: time="2026-03-02T12:58:00.242566423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:00.244699 containerd[1473]: time="2026-03-02T12:58:00.244623825Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.116233494s" Mar 2 12:58:00.244699 containerd[1473]: time="2026-03-02T12:58:00.244704757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 2 12:58:00.248909 containerd[1473]: time="2026-03-02T12:58:00.248817865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 2 12:58:01.817639 containerd[1473]: time="2026-03-02T12:58:01.817399530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:01.819521 containerd[1473]: time="2026-03-02T12:58:01.819440103Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 2 12:58:01.824623 containerd[1473]: time="2026-03-02T12:58:01.824438798Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:01.829937 containerd[1473]: time="2026-03-02T12:58:01.829829594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:01.831612 containerd[1473]: time="2026-03-02T12:58:01.831523515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.58262554s" 
Mar 2 12:58:01.831612 containerd[1473]: time="2026-03-02T12:58:01.831597022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 2 12:58:01.835162 containerd[1473]: time="2026-03-02T12:58:01.835113983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 2 12:58:08.048296 containerd[1473]: time="2026-03-02T12:58:08.047948438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:08.051703 containerd[1473]: time="2026-03-02T12:58:08.048740870Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 2 12:58:08.051703 containerd[1473]: time="2026-03-02T12:58:08.050756342Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:08.055445 containerd[1473]: time="2026-03-02T12:58:08.055362090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:08.056811 containerd[1473]: time="2026-03-02T12:58:08.056632325Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 6.221399049s" Mar 2 12:58:08.057059 containerd[1473]: time="2026-03-02T12:58:08.056961299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference 
\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 2 12:58:08.061829 containerd[1473]: time="2026-03-02T12:58:08.061734769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 2 12:58:09.028960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 12:58:09.047283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:58:10.382720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:58:10.437669 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:58:10.776845 kubelet[1907]: E0302 12:58:10.776123 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:58:10.782546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:58:10.782829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:58:10.783541 systemd[1]: kubelet.service: Consumed 1.373s CPU time. Mar 2 12:58:10.960574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207941076.mount: Deactivated successfully. 
Mar 2 12:58:12.093126 containerd[1473]: time="2026-03-02T12:58:12.092812912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:12.095203 containerd[1473]: time="2026-03-02T12:58:12.094294259Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 2 12:58:12.095859 containerd[1473]: time="2026-03-02T12:58:12.095700815Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:12.099362 containerd[1473]: time="2026-03-02T12:58:12.099300900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:12.100503 containerd[1473]: time="2026-03-02T12:58:12.100435825Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 4.038604907s" Mar 2 12:58:12.100503 containerd[1473]: time="2026-03-02T12:58:12.100495898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 2 12:58:12.104193 containerd[1473]: time="2026-03-02T12:58:12.104151947Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 2 12:58:12.570096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520806118.mount: Deactivated successfully. 
Mar 2 12:58:14.144594 containerd[1473]: time="2026-03-02T12:58:14.144428797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:14.145927 containerd[1473]: time="2026-03-02T12:58:14.145044270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 2 12:58:14.147070 containerd[1473]: time="2026-03-02T12:58:14.146940113Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:14.153073 containerd[1473]: time="2026-03-02T12:58:14.152841544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:14.155419 containerd[1473]: time="2026-03-02T12:58:14.155243423Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.05105604s" Mar 2 12:58:14.155419 containerd[1473]: time="2026-03-02T12:58:14.155382952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 2 12:58:14.160833 containerd[1473]: time="2026-03-02T12:58:14.160574497Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 2 12:58:15.388322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362628154.mount: Deactivated successfully. 
Mar 2 12:58:15.392655 containerd[1473]: time="2026-03-02T12:58:15.390434691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:15.403441 containerd[1473]: time="2026-03-02T12:58:15.397693483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 2 12:58:15.440467 containerd[1473]: time="2026-03-02T12:58:15.440108960Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:15.456921 containerd[1473]: time="2026-03-02T12:58:15.456630119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:15.458330 containerd[1473]: time="2026-03-02T12:58:15.457283724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.296683148s" Mar 2 12:58:15.458330 containerd[1473]: time="2026-03-02T12:58:15.457311676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 2 12:58:15.460672 containerd[1473]: time="2026-03-02T12:58:15.460318938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 2 12:58:15.913734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936917304.mount: Deactivated successfully. 
Mar 2 12:58:17.107590 containerd[1473]: time="2026-03-02T12:58:17.107467543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:58:17.109237 containerd[1473]: time="2026-03-02T12:58:17.108176553Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 2 12:58:17.109725 containerd[1473]: time="2026-03-02T12:58:17.109684453Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:58:17.114474 containerd[1473]: time="2026-03-02T12:58:17.114396187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:58:17.115765 containerd[1473]: time="2026-03-02T12:58:17.115702217Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.655347513s"
Mar 2 12:58:17.115765 containerd[1473]: time="2026-03-02T12:58:17.115751789Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 2 12:58:19.616592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 12:58:19.616978 systemd[1]: kubelet.service: Consumed 1.373s CPU time.
Mar 2 12:58:19.627424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:58:19.660874 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-7.scope)...
Mar 2 12:58:19.660889 systemd[1]: Reloading...
Mar 2 12:58:19.749138 zram_generator::config[2110]: No configuration found.
Mar 2 12:58:19.862620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 12:58:19.945153 systemd[1]: Reloading finished in 283 ms.
Mar 2 12:58:20.011412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:58:20.015614 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 12:58:20.015919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 12:58:20.017949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:58:20.206234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 12:58:20.213769 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 12:58:20.318114 kubelet[2159]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 12:58:20.318114 kubelet[2159]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 12:58:20.318114 kubelet[2159]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 12:58:20.319221 kubelet[2159]: I0302 12:58:20.318943 2159 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 12:58:20.866154 kubelet[2159]: I0302 12:58:20.865880 2159 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 2 12:58:20.866154 kubelet[2159]: I0302 12:58:20.866121 2159 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 12:58:20.866965 kubelet[2159]: I0302 12:58:20.866897 2159 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 12:58:20.928460 kubelet[2159]: E0302 12:58:20.928228 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 12:58:20.933191 kubelet[2159]: I0302 12:58:20.932804 2159 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 12:58:20.948819 kubelet[2159]: E0302 12:58:20.948672 2159 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 12:58:20.948819 kubelet[2159]: I0302 12:58:20.948725 2159 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 2 12:58:20.958503 kubelet[2159]: I0302 12:58:20.958438 2159 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 2 12:58:20.960230 kubelet[2159]: I0302 12:58:20.960147 2159 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 12:58:20.960736 kubelet[2159]: I0302 12:58:20.960200 2159 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 12:58:20.960977 kubelet[2159]: I0302 12:58:20.960764 2159 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 12:58:20.960977 kubelet[2159]: I0302 12:58:20.960807 2159 container_manager_linux.go:303] "Creating device plugin manager"
Mar 2 12:58:20.961368 kubelet[2159]: I0302 12:58:20.961264 2159 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 12:58:20.969762 kubelet[2159]: I0302 12:58:20.969697 2159 kubelet.go:480] "Attempting to sync node with API server"
Mar 2 12:58:20.969866 kubelet[2159]: I0302 12:58:20.969821 2159 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 12:58:20.969992 kubelet[2159]: I0302 12:58:20.969926 2159 kubelet.go:386] "Adding apiserver pod source"
Mar 2 12:58:20.971959 kubelet[2159]: I0302 12:58:20.971938 2159 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 12:58:20.974382 kubelet[2159]: E0302 12:58:20.974276 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 12:58:20.974382 kubelet[2159]: E0302 12:58:20.974284 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 12:58:20.977625 kubelet[2159]: I0302 12:58:20.977552 2159 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 12:58:20.978504 kubelet[2159]: I0302 12:58:20.978451 2159 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 12:58:20.979836 kubelet[2159]: W0302 12:58:20.979748 2159 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 12:58:20.990139 kubelet[2159]: I0302 12:58:20.990073 2159 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 2 12:58:20.990372 kubelet[2159]: I0302 12:58:20.990277 2159 server.go:1289] "Started kubelet"
Mar 2 12:58:20.991303 kubelet[2159]: I0302 12:58:20.991209 2159 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 12:58:20.992612 kubelet[2159]: I0302 12:58:20.992511 2159 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 12:58:20.992897 kubelet[2159]: I0302 12:58:20.992849 2159 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 12:58:20.993968 kubelet[2159]: I0302 12:58:20.991232 2159 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 12:58:20.994702 kubelet[2159]: I0302 12:58:20.994348 2159 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 12:58:20.996803 kubelet[2159]: E0302 12:58:20.996545 2159 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 12:58:20.997117 kubelet[2159]: I0302 12:58:20.997046 2159 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 12:58:20.997436 kubelet[2159]: E0302 12:58:20.997360 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 12:58:20.997876 kubelet[2159]: I0302 12:58:20.997861 2159 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 2 12:58:20.998950 kubelet[2159]: I0302 12:58:20.998935 2159 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 2 12:58:20.999272 kubelet[2159]: I0302 12:58:20.999259 2159 reconciler.go:26] "Reconciler: start to sync state"
Mar 2 12:58:20.999689 kubelet[2159]: E0302 12:58:20.999667 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 12:58:20.999744 kubelet[2159]: E0302 12:58:20.995181 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899079d686c6c4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:58:20.990196812 +0000 UTC m=+0.749608422,LastTimestamp:2026-03-02 12:58:20.990196812 +0000 UTC m=+0.749608422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 12:58:21.000263 kubelet[2159]: E0302 12:58:21.000205 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms"
Mar 2 12:58:21.001520 kubelet[2159]: I0302 12:58:21.001465 2159 factory.go:223] Registration of the systemd container factory successfully
Mar 2 12:58:21.001629 kubelet[2159]: I0302 12:58:21.001600 2159 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 12:58:21.003271 kubelet[2159]: I0302 12:58:21.003237 2159 factory.go:223] Registration of the containerd container factory successfully
Mar 2 12:58:21.035518 kubelet[2159]: I0302 12:58:21.035357 2159 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 2 12:58:21.037501 kubelet[2159]: I0302 12:58:21.037480 2159 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 2 12:58:21.037694 kubelet[2159]: I0302 12:58:21.037681 2159 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 2 12:58:21.037911 kubelet[2159]: I0302 12:58:21.037895 2159 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 12:58:21.037962 kubelet[2159]: I0302 12:58:21.037954 2159 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 2 12:58:21.038361 kubelet[2159]: E0302 12:58:21.038091 2159 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 12:58:21.039686 kubelet[2159]: E0302 12:58:21.039610 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 12:58:21.040204 kubelet[2159]: I0302 12:58:21.040137 2159 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 12:58:21.040204 kubelet[2159]: I0302 12:58:21.040189 2159 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 12:58:21.040293 kubelet[2159]: I0302 12:58:21.040242 2159 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 12:58:21.044608 kubelet[2159]: I0302 12:58:21.044561 2159 policy_none.go:49] "None policy: Start"
Mar 2 12:58:21.044657 kubelet[2159]: I0302 12:58:21.044646 2159 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 2 12:58:21.044796 kubelet[2159]: I0302 12:58:21.044740 2159 state_mem.go:35] "Initializing new in-memory state store"
Mar 2 12:58:21.054669 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 12:58:21.071377 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 12:58:21.076064 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 12:58:21.090617 kubelet[2159]: E0302 12:58:21.090348 2159 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 12:58:21.090617 kubelet[2159]: I0302 12:58:21.090612 2159 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 12:58:21.090752 kubelet[2159]: I0302 12:58:21.090647 2159 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 12:58:21.091198 kubelet[2159]: I0302 12:58:21.091151 2159 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 12:58:21.092336 kubelet[2159]: E0302 12:58:21.092141 2159 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 12:58:21.092336 kubelet[2159]: E0302 12:58:21.092257 2159 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 12:58:21.227695 kubelet[2159]: I0302 12:58:21.224982 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:58:21.227695 kubelet[2159]: E0302 12:58:21.225660 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Mar 2 12:58:21.236933 kubelet[2159]: E0302 12:58:21.235655 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms"
Mar 2 12:58:21.237942 systemd[1]: Created slice kubepods-burstable-podd16d84641d90158e70d6ed2e75742706.slice - libcontainer container kubepods-burstable-podd16d84641d90158e70d6ed2e75742706.slice.
Mar 2 12:58:21.249637 kubelet[2159]: E0302 12:58:21.249542 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:58:21.259084 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice.
Mar 2 12:58:21.262143 kubelet[2159]: E0302 12:58:21.261948 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:58:21.306925 kubelet[2159]: I0302 12:58:21.306657 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d16d84641d90158e70d6ed2e75742706-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d16d84641d90158e70d6ed2e75742706\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:58:21.307305 kubelet[2159]: I0302 12:58:21.306858 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d16d84641d90158e70d6ed2e75742706-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d16d84641d90158e70d6ed2e75742706\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:58:21.307305 kubelet[2159]: I0302 12:58:21.307147 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d16d84641d90158e70d6ed2e75742706-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d16d84641d90158e70d6ed2e75742706\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:58:21.310499 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice.
Mar 2 12:58:21.313864 kubelet[2159]: E0302 12:58:21.313809 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:58:21.413194 kubelet[2159]: I0302 12:58:21.412674 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:21.413194 kubelet[2159]: I0302 12:58:21.413248 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:21.416544 kubelet[2159]: I0302 12:58:21.413932 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:21.416544 kubelet[2159]: I0302 12:58:21.413992 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:21.416544 kubelet[2159]: I0302 12:58:21.414104 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 12:58:21.416544 kubelet[2159]: I0302 12:58:21.414121 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:21.434583 kubelet[2159]: I0302 12:58:21.434417 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:58:21.435401 kubelet[2159]: E0302 12:58:21.435297 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Mar 2 12:58:21.562219 kubelet[2159]: E0302 12:58:21.559610 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:21.564581 kubelet[2159]: E0302 12:58:21.564551 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:21.579874 containerd[1473]: time="2026-03-02T12:58:21.578980280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 2 12:58:21.579874 containerd[1473]: time="2026-03-02T12:58:21.579144807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d16d84641d90158e70d6ed2e75742706,Namespace:kube-system,Attempt:0,}"
Mar 2 12:58:21.632171 kubelet[2159]: E0302 12:58:21.630716 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:21.636028 containerd[1473]: time="2026-03-02T12:58:21.635909952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 2 12:58:21.637597 kubelet[2159]: E0302 12:58:21.637397 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms"
Mar 2 12:58:21.846141 kubelet[2159]: E0302 12:58:21.845547 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 12:58:21.847103 kubelet[2159]: I0302 12:58:21.847061 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:58:21.847739 kubelet[2159]: E0302 12:58:21.847636 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Mar 2 12:58:21.890797 kubelet[2159]: E0302 12:58:21.890644 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 12:58:21.919491 kubelet[2159]: E0302 12:58:21.919384 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 12:58:22.086356 kubelet[2159]: E0302 12:58:22.086239 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 12:58:22.228341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781640282.mount: Deactivated successfully.
Mar 2 12:58:22.234904 containerd[1473]: time="2026-03-02T12:58:22.234838291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:58:22.236275 containerd[1473]: time="2026-03-02T12:58:22.235702456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 2 12:58:22.239174 containerd[1473]: time="2026-03-02T12:58:22.239093501Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:58:22.240498 containerd[1473]: time="2026-03-02T12:58:22.240364830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:58:22.241365 containerd[1473]: time="2026-03-02T12:58:22.241231075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:58:22.242269 containerd[1473]: time="2026-03-02T12:58:22.242217430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 12:58:22.243438 containerd[1473]: time="2026-03-02T12:58:22.243318153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 12:58:22.244460 containerd[1473]: time="2026-03-02T12:58:22.244419795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:58:22.246657 containerd[1473]: time="2026-03-02T12:58:22.246594622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.593528ms"
Mar 2 12:58:22.247495 containerd[1473]: time="2026-03-02T12:58:22.247429847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.430811ms"
Mar 2 12:58:22.249332 containerd[1473]: time="2026-03-02T12:58:22.249287582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.099151ms"
Mar 2 12:58:22.532912 kubelet[2159]: E0302 12:58:22.530833 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s"
Mar 2 12:58:22.654652 kubelet[2159]: I0302 12:58:22.653421 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:58:22.654652 kubelet[2159]: E0302 12:58:22.653921 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Mar 2 12:58:22.801969 containerd[1473]: time="2026-03-02T12:58:22.801472847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:58:22.801969 containerd[1473]: time="2026-03-02T12:58:22.801556062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:58:22.801969 containerd[1473]: time="2026-03-02T12:58:22.801601046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:22.802868 containerd[1473]: time="2026-03-02T12:58:22.802095108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:22.810661 containerd[1473]: time="2026-03-02T12:58:22.807468160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:58:22.810661 containerd[1473]: time="2026-03-02T12:58:22.807522871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:58:22.810661 containerd[1473]: time="2026-03-02T12:58:22.807538510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:22.810661 containerd[1473]: time="2026-03-02T12:58:22.807621946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:22.829675 containerd[1473]: time="2026-03-02T12:58:22.829492884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:58:22.832537 containerd[1473]: time="2026-03-02T12:58:22.832363137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:58:22.832537 containerd[1473]: time="2026-03-02T12:58:22.832428650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:22.833080 containerd[1473]: time="2026-03-02T12:58:22.832892605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:22.927084 systemd[1]: Started cri-containerd-0c8b7a216fe3d462bd88d2460420549aac8c52b6d21c6b631ee8ba2fe83d25cf.scope - libcontainer container 0c8b7a216fe3d462bd88d2460420549aac8c52b6d21c6b631ee8ba2fe83d25cf.
Mar 2 12:58:22.936680 systemd[1]: Started cri-containerd-becb1ee00d69515d82b2a27878bb0a4ac2f3decf08402ad84497640fa85dece1.scope - libcontainer container becb1ee00d69515d82b2a27878bb0a4ac2f3decf08402ad84497640fa85dece1.
Mar 2 12:58:22.941766 kubelet[2159]: E0302 12:58:22.941719 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:58:22.955166 systemd[1]: Started cri-containerd-4fa8d6a282dff89142411626dc0b0c831dfecc02c0a1d9b628916edbc9ee3c96.scope - libcontainer container 4fa8d6a282dff89142411626dc0b0c831dfecc02c0a1d9b628916edbc9ee3c96. Mar 2 12:58:23.049284 containerd[1473]: time="2026-03-02T12:58:23.048887951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d16d84641d90158e70d6ed2e75742706,Namespace:kube-system,Attempt:0,} returns sandbox id \"becb1ee00d69515d82b2a27878bb0a4ac2f3decf08402ad84497640fa85dece1\"" Mar 2 12:58:23.058171 kubelet[2159]: E0302 12:58:23.057911 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:23.085329 containerd[1473]: time="2026-03-02T12:58:23.085250018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c8b7a216fe3d462bd88d2460420549aac8c52b6d21c6b631ee8ba2fe83d25cf\"" Mar 2 12:58:23.088216 kubelet[2159]: E0302 12:58:23.088066 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:23.089355 containerd[1473]: time="2026-03-02T12:58:23.089294613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"4fa8d6a282dff89142411626dc0b0c831dfecc02c0a1d9b628916edbc9ee3c96\"" Mar 2 12:58:23.090205 kubelet[2159]: E0302 12:58:23.090180 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:23.454351 containerd[1473]: time="2026-03-02T12:58:23.453448607Z" level=info msg="CreateContainer within sandbox \"becb1ee00d69515d82b2a27878bb0a4ac2f3decf08402ad84497640fa85dece1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 12:58:23.457748 containerd[1473]: time="2026-03-02T12:58:23.457585433Z" level=info msg="CreateContainer within sandbox \"0c8b7a216fe3d462bd88d2460420549aac8c52b6d21c6b631ee8ba2fe83d25cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 12:58:23.463256 containerd[1473]: time="2026-03-02T12:58:23.463207370Z" level=info msg="CreateContainer within sandbox \"4fa8d6a282dff89142411626dc0b0c831dfecc02c0a1d9b628916edbc9ee3c96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 12:58:23.477819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773704998.mount: Deactivated successfully. 
Mar 2 12:58:23.490125 containerd[1473]: time="2026-03-02T12:58:23.490067253Z" level=info msg="CreateContainer within sandbox \"0c8b7a216fe3d462bd88d2460420549aac8c52b6d21c6b631ee8ba2fe83d25cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d901e07766bbc6a0baa9e6ffd540729297ce36dd54bb42e2ddb03c0e6048c92e\"" Mar 2 12:58:23.492482 containerd[1473]: time="2026-03-02T12:58:23.492366874Z" level=info msg="StartContainer for \"d901e07766bbc6a0baa9e6ffd540729297ce36dd54bb42e2ddb03c0e6048c92e\"" Mar 2 12:58:23.494915 containerd[1473]: time="2026-03-02T12:58:23.494888841Z" level=info msg="CreateContainer within sandbox \"becb1ee00d69515d82b2a27878bb0a4ac2f3decf08402ad84497640fa85dece1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ff41cf8c4867aed98c43e28202fd6003da6cf0d6b27612d075e999cd0f56d77\"" Mar 2 12:58:23.495723 containerd[1473]: time="2026-03-02T12:58:23.495701347Z" level=info msg="StartContainer for \"2ff41cf8c4867aed98c43e28202fd6003da6cf0d6b27612d075e999cd0f56d77\"" Mar 2 12:58:23.502362 containerd[1473]: time="2026-03-02T12:58:23.502336853Z" level=info msg="CreateContainer within sandbox \"4fa8d6a282dff89142411626dc0b0c831dfecc02c0a1d9b628916edbc9ee3c96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a519c24967dfee277b463c4d5494ad41269ba67a9fd61f55302b75af413b92c2\"" Mar 2 12:58:23.503285 containerd[1473]: time="2026-03-02T12:58:23.503200258Z" level=info msg="StartContainer for \"a519c24967dfee277b463c4d5494ad41269ba67a9fd61f55302b75af413b92c2\"" Mar 2 12:58:23.558280 systemd[1]: Started cri-containerd-2ff41cf8c4867aed98c43e28202fd6003da6cf0d6b27612d075e999cd0f56d77.scope - libcontainer container 2ff41cf8c4867aed98c43e28202fd6003da6cf0d6b27612d075e999cd0f56d77. 
Mar 2 12:58:23.563667 systemd[1]: Started cri-containerd-a519c24967dfee277b463c4d5494ad41269ba67a9fd61f55302b75af413b92c2.scope - libcontainer container a519c24967dfee277b463c4d5494ad41269ba67a9fd61f55302b75af413b92c2. Mar 2 12:58:23.565762 systemd[1]: Started cri-containerd-d901e07766bbc6a0baa9e6ffd540729297ce36dd54bb42e2ddb03c0e6048c92e.scope - libcontainer container d901e07766bbc6a0baa9e6ffd540729297ce36dd54bb42e2ddb03c0e6048c92e. Mar 2 12:58:23.758478 containerd[1473]: time="2026-03-02T12:58:23.743304758Z" level=info msg="StartContainer for \"2ff41cf8c4867aed98c43e28202fd6003da6cf0d6b27612d075e999cd0f56d77\" returns successfully" Mar 2 12:58:23.848622 containerd[1473]: time="2026-03-02T12:58:23.848519526Z" level=info msg="StartContainer for \"a519c24967dfee277b463c4d5494ad41269ba67a9fd61f55302b75af413b92c2\" returns successfully" Mar 2 12:58:23.871200 containerd[1473]: time="2026-03-02T12:58:23.870981987Z" level=info msg="StartContainer for \"d901e07766bbc6a0baa9e6ffd540729297ce36dd54bb42e2ddb03c0e6048c92e\" returns successfully" Mar 2 12:58:24.158528 kubelet[2159]: E0302 12:58:24.158373 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:58:24.159414 kubelet[2159]: E0302 12:58:24.158700 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:24.163299 kubelet[2159]: E0302 12:58:24.163223 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:58:24.165071 kubelet[2159]: E0302 12:58:24.163420 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:24.166813 kubelet[2159]: E0302 
12:58:24.166709 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:58:24.167114 kubelet[2159]: E0302 12:58:24.166973 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:24.256235 kubelet[2159]: I0302 12:58:24.256178 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 12:58:25.202907 kubelet[2159]: E0302 12:58:25.202701 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:58:25.202907 kubelet[2159]: E0302 12:58:25.202742 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:58:25.204083 kubelet[2159]: E0302 12:58:25.203281 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:25.204083 kubelet[2159]: E0302 12:58:25.203299 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:25.744500 kubelet[2159]: E0302 12:58:25.744396 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:58:25.744676 kubelet[2159]: E0302 12:58:25.744596 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:26.094113 kubelet[2159]: E0302 12:58:26.093668 2159 nodelease.go:49] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 12:58:26.165617 kubelet[2159]: I0302 12:58:26.165558 2159 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 12:58:26.181690 kubelet[2159]: I0302 12:58:26.180664 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:26.199981 kubelet[2159]: I0302 12:58:26.199829 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:26.233583 kubelet[2159]: E0302 12:58:26.228305 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:26.233583 kubelet[2159]: I0302 12:58:26.228352 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:58:26.233583 kubelet[2159]: E0302 12:58:26.228574 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:26.233583 kubelet[2159]: E0302 12:58:26.228846 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:26.233583 kubelet[2159]: E0302 12:58:26.232309 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:58:26.233583 kubelet[2159]: I0302 12:58:26.232327 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:58:26.234427 kubelet[2159]: E0302 
12:58:26.234233 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 12:58:26.979842 kubelet[2159]: I0302 12:58:26.979316 2159 apiserver.go:52] "Watching apiserver" Mar 2 12:58:27.503853 update_engine[1460]: I20260302 12:58:27.454514 1460 update_attempter.cc:509] Updating boot flags... Mar 2 12:58:27.832583 kubelet[2159]: I0302 12:58:27.804523 2159 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 12:58:28.148857 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2457) Mar 2 12:58:28.240104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2457) Mar 2 12:58:28.439160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2457) Mar 2 12:58:28.892387 kubelet[2159]: I0302 12:58:28.887931 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:29.037054 kubelet[2159]: E0302 12:58:29.036449 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:30.455286 kubelet[2159]: E0302 12:58:30.454764 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:31.464810 kubelet[2159]: I0302 12:58:31.464189 2159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.464063548 podStartE2EDuration="3.464063548s" podCreationTimestamp="2026-03-02 12:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:58:31.46386003 +0000 UTC m=+11.223271641" watchObservedRunningTime="2026-03-02 12:58:31.464063548 +0000 UTC m=+11.223475169" Mar 2 12:58:31.794574 kubelet[2159]: I0302 12:58:31.792656 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:58:31.920115 kubelet[2159]: E0302 12:58:31.919500 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:32.509531 kubelet[2159]: E0302 12:58:32.508644 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:35.753354 kubelet[2159]: I0302 12:58:35.753102 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:58:35.763388 kubelet[2159]: E0302 12:58:35.763272 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:35.766860 kubelet[2159]: I0302 12:58:35.766311 2159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.766214713 podStartE2EDuration="4.766214713s" podCreationTimestamp="2026-03-02 12:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:58:35.766112441 +0000 UTC m=+15.525524051" watchObservedRunningTime="2026-03-02 12:58:35.766214713 +0000 UTC m=+15.525626324" Mar 2 12:58:35.795175 kubelet[2159]: I0302 12:58:35.794708 2159 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.79459754 
podStartE2EDuration="794.59754ms" podCreationTimestamp="2026-03-02 12:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:58:35.793758394 +0000 UTC m=+15.553170005" watchObservedRunningTime="2026-03-02 12:58:35.79459754 +0000 UTC m=+15.554009161" Mar 2 12:58:36.673537 kubelet[2159]: E0302 12:58:36.673473 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:36.704447 systemd[1]: Reloading requested from client PID 2470 ('systemctl') (unit session-7.scope)... Mar 2 12:58:36.704498 systemd[1]: Reloading... Mar 2 12:58:36.821093 zram_generator::config[2509]: No configuration found. Mar 2 12:58:36.933930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:58:37.039619 systemd[1]: Reloading finished in 334 ms. Mar 2 12:58:37.100516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:58:37.121178 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 12:58:37.121578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:58:37.121720 systemd[1]: kubelet.service: Consumed 9.942s CPU time, 137.6M memory peak, 0B memory swap peak. Mar 2 12:58:37.130432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:58:37.487399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:58:37.493810 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:58:37.599502 kubelet[2554]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:58:37.599502 kubelet[2554]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 12:58:37.599930 kubelet[2554]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:58:37.599930 kubelet[2554]: I0302 12:58:37.599652 2554 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 12:58:37.615573 kubelet[2554]: I0302 12:58:37.615320 2554 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 12:58:37.615573 kubelet[2554]: I0302 12:58:37.615370 2554 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:58:37.616591 kubelet[2554]: I0302 12:58:37.616535 2554 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 12:58:37.618647 kubelet[2554]: I0302 12:58:37.618587 2554 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 12:58:37.622693 kubelet[2554]: I0302 12:58:37.622563 2554 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:58:37.630416 kubelet[2554]: E0302 12:58:37.630383 2554 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:58:37.630595 kubelet[2554]: I0302 12:58:37.630527 2554 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 12:58:37.637026 kubelet[2554]: I0302 12:58:37.636959 2554 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 2 12:58:37.637633 kubelet[2554]: I0302 12:58:37.637413 2554 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:58:37.637945 kubelet[2554]: I0302 12:58:37.637533 2554 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 12:58:37.637945 kubelet[2554]: I0302 12:58:37.637937 2554 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 12:58:37.638339 kubelet[2554]: I0302 12:58:37.637957 2554 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 12:58:37.638591 kubelet[2554]: I0302 12:58:37.638497 2554 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:58:37.639217 kubelet[2554]: I0302 12:58:37.639176 2554 kubelet.go:480] "Attempting to sync node with API server" Mar 2 12:58:37.639217 kubelet[2554]: I0302 12:58:37.639203 2554 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:58:37.639351 kubelet[2554]: I0302 12:58:37.639310 2554 kubelet.go:386] "Adding apiserver pod source" Mar 2 12:58:37.639443 kubelet[2554]: I0302 12:58:37.639389 2554 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:58:37.646118 kubelet[2554]: I0302 12:58:37.642488 2554 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:58:37.646118 kubelet[2554]: I0302 12:58:37.643489 2554 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:58:37.666897 kubelet[2554]: I0302 12:58:37.666867 2554 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 12:58:37.667208 kubelet[2554]: I0302 12:58:37.667189 2554 server.go:1289] "Started kubelet" Mar 2 12:58:37.682884 kubelet[2554]: I0302 12:58:37.682727 2554 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 
12:58:37.686409 kubelet[2554]: I0302 12:58:37.683254 2554 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:58:37.686855 kubelet[2554]: I0302 12:58:37.686683 2554 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:58:37.687658 kubelet[2554]: I0302 12:58:37.687603 2554 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 12:58:37.688894 kubelet[2554]: I0302 12:58:37.688869 2554 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:58:37.694423 kubelet[2554]: I0302 12:58:37.690354 2554 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 12:58:37.695464 kubelet[2554]: I0302 12:58:37.695403 2554 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:58:37.695593 kubelet[2554]: I0302 12:58:37.695533 2554 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:58:37.695689 kubelet[2554]: I0302 12:58:37.690483 2554 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 12:58:37.696126 kubelet[2554]: I0302 12:58:37.696107 2554 reconciler.go:26] "Reconciler: start to sync state" Mar 2 12:58:37.696388 kubelet[2554]: I0302 12:58:37.690535 2554 server.go:317] "Adding debug handlers to kubelet server" Mar 2 12:58:37.699655 kubelet[2554]: E0302 12:58:37.699106 2554 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:58:37.702494 kubelet[2554]: I0302 12:58:37.702440 2554 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:58:37.714399 kubelet[2554]: I0302 12:58:37.714346 2554 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 12:58:37.740136 kubelet[2554]: I0302 12:58:37.738172 2554 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 12:58:37.741485 kubelet[2554]: I0302 12:58:37.740663 2554 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 12:58:37.741485 kubelet[2554]: I0302 12:58:37.740832 2554 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 12:58:37.741485 kubelet[2554]: I0302 12:58:37.740854 2554 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 12:58:37.749744 kubelet[2554]: E0302 12:58:37.740925 2554 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:58:37.800638 kubelet[2554]: I0302 12:58:37.800548 2554 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 12:58:37.800638 kubelet[2554]: I0302 12:58:37.800602 2554 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 12:58:37.800876 kubelet[2554]: I0302 12:58:37.800687 2554 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:58:37.801162 kubelet[2554]: I0302 12:58:37.801100 2554 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 12:58:37.801316 kubelet[2554]: I0302 12:58:37.801145 2554 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 12:58:37.801316 kubelet[2554]: I0302 12:58:37.801205 2554 policy_none.go:49] "None policy: Start" Mar 2 12:58:37.801316 kubelet[2554]: I0302 12:58:37.801303 2554 memory_manager.go:186] "Starting memorymanager" 
policy="None" Mar 2 12:58:37.801425 kubelet[2554]: I0302 12:58:37.801399 2554 state_mem.go:35] "Initializing new in-memory state store" Mar 2 12:58:37.803619 kubelet[2554]: I0302 12:58:37.801633 2554 state_mem.go:75] "Updated machine memory state" Mar 2 12:58:37.813809 kubelet[2554]: E0302 12:58:37.813709 2554 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:58:37.814332 kubelet[2554]: I0302 12:58:37.814270 2554 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 12:58:37.814516 kubelet[2554]: I0302 12:58:37.814320 2554 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:58:37.815083 kubelet[2554]: I0302 12:58:37.814942 2554 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 12:58:37.819066 kubelet[2554]: E0302 12:58:37.818114 2554 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 12:58:37.848992 kubelet[2554]: I0302 12:58:37.848857 2554 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:58:37.849423 kubelet[2554]: I0302 12:58:37.849351 2554 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:58:37.849685 kubelet[2554]: I0302 12:58:37.849599 2554 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:37.860201 kubelet[2554]: E0302 12:58:37.860095 2554 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:58:37.861483 kubelet[2554]: E0302 12:58:37.861417 2554 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 12:58:37.862472 kubelet[2554]: E0302 12:58:37.861691 2554 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:37.897853 kubelet[2554]: I0302 12:58:37.897730 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:58:37.897853 kubelet[2554]: I0302 12:58:37.897836 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d16d84641d90158e70d6ed2e75742706-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d16d84641d90158e70d6ed2e75742706\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:58:37.898113 kubelet[2554]: I0302 
12:58:37.897938 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d16d84641d90158e70d6ed2e75742706-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d16d84641d90158e70d6ed2e75742706\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:58:37.898113 kubelet[2554]: I0302 12:58:37.897976 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d16d84641d90158e70d6ed2e75742706-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d16d84641d90158e70d6ed2e75742706\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:58:37.898113 kubelet[2554]: I0302 12:58:37.898070 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:37.898113 kubelet[2554]: I0302 12:58:37.898100 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:37.898238 kubelet[2554]: I0302 12:58:37.898123 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:37.898238 kubelet[2554]: I0302 12:58:37.898155 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:37.898238 kubelet[2554]: I0302 12:58:37.898182 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:58:37.928118 kubelet[2554]: I0302 12:58:37.928067 2554 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:58:37.938881 kubelet[2554]: I0302 12:58:37.938734 2554 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 2 12:58:37.938971 kubelet[2554]: I0302 12:58:37.938944 2554 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 12:58:38.173040 kubelet[2554]: E0302 12:58:38.168614 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:38.173040 kubelet[2554]: E0302 12:58:38.169155 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:38.173040 kubelet[2554]: E0302 12:58:38.169667 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:38.645874 kubelet[2554]: I0302 12:58:38.645375 2554 apiserver.go:52] "Watching apiserver"
Mar 2 12:58:38.740549 kubelet[2554]: I0302 12:58:38.739378 2554 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 2 12:58:38.940654 kubelet[2554]: E0302 12:58:38.933729 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:38.958188 kubelet[2554]: E0302 12:58:38.933766 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:38.961252 kubelet[2554]: E0302 12:58:38.957412 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:39.841601 kubelet[2554]: E0302 12:58:39.841518 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:39.841601 kubelet[2554]: E0302 12:58:39.841542 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:41.190507 kubelet[2554]: I0302 12:58:41.190333 2554 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 2 12:58:41.191604 kubelet[2554]: I0302 12:58:41.191308 2554 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 2 12:58:41.191643 containerd[1473]: time="2026-03-02T12:58:41.191048171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 12:58:43.328727 kubelet[2554]: E0302 12:58:43.328431 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:43.862537 kubelet[2554]: E0302 12:58:43.862496 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:44.250694 kubelet[2554]: E0302 12:58:44.249347 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:44.866140 kubelet[2554]: E0302 12:58:44.865834 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:45.583127 systemd[1]: Created slice kubepods-besteffort-pode7b96014_dd4c_414b_b70e_8a1ca75c64f5.slice - libcontainer container kubepods-besteffort-pode7b96014_dd4c_414b_b70e_8a1ca75c64f5.slice.
Mar 2 12:58:45.607739 systemd[1]: Created slice kubepods-besteffort-pod19f47fe4_3bd1_4b9e_b267_aaa1815cf68d.slice - libcontainer container kubepods-besteffort-pod19f47fe4_3bd1_4b9e_b267_aaa1815cf68d.slice.
Mar 2 12:58:45.648874 kubelet[2554]: I0302 12:58:45.648757 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7b96014-dd4c-414b-b70e-8a1ca75c64f5-lib-modules\") pod \"kube-proxy-c5z2f\" (UID: \"e7b96014-dd4c-414b-b70e-8a1ca75c64f5\") " pod="kube-system/kube-proxy-c5z2f"
Mar 2 12:58:45.648874 kubelet[2554]: I0302 12:58:45.648852 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7b96014-dd4c-414b-b70e-8a1ca75c64f5-kube-proxy\") pod \"kube-proxy-c5z2f\" (UID: \"e7b96014-dd4c-414b-b70e-8a1ca75c64f5\") " pod="kube-system/kube-proxy-c5z2f"
Mar 2 12:58:45.648874 kubelet[2554]: I0302 12:58:45.648877 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19f47fe4-3bd1-4b9e-b267-aaa1815cf68d-var-lib-calico\") pod \"tigera-operator-7d4578d8d-xtwxq\" (UID: \"19f47fe4-3bd1-4b9e-b267-aaa1815cf68d\") " pod="tigera-operator/tigera-operator-7d4578d8d-xtwxq"
Mar 2 12:58:45.649164 kubelet[2554]: I0302 12:58:45.648898 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7b96014-dd4c-414b-b70e-8a1ca75c64f5-xtables-lock\") pod \"kube-proxy-c5z2f\" (UID: \"e7b96014-dd4c-414b-b70e-8a1ca75c64f5\") " pod="kube-system/kube-proxy-c5z2f"
Mar 2 12:58:45.649164 kubelet[2554]: I0302 12:58:45.648915 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxd2\" (UniqueName: \"kubernetes.io/projected/19f47fe4-3bd1-4b9e-b267-aaa1815cf68d-kube-api-access-kdxd2\") pod \"tigera-operator-7d4578d8d-xtwxq\" (UID: \"19f47fe4-3bd1-4b9e-b267-aaa1815cf68d\") " pod="tigera-operator/tigera-operator-7d4578d8d-xtwxq"
Mar 2 12:58:45.649164 kubelet[2554]: I0302 12:58:45.648931 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6x5\" (UniqueName: \"kubernetes.io/projected/e7b96014-dd4c-414b-b70e-8a1ca75c64f5-kube-api-access-4f6x5\") pod \"kube-proxy-c5z2f\" (UID: \"e7b96014-dd4c-414b-b70e-8a1ca75c64f5\") " pod="kube-system/kube-proxy-c5z2f"
Mar 2 12:58:45.829582 kubelet[2554]: E0302 12:58:45.829519 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:45.868140 kubelet[2554]: E0302 12:58:45.867863 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:45.871593 kubelet[2554]: E0302 12:58:45.871520 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:45.892984 kubelet[2554]: E0302 12:58:45.892888 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:45.894060 containerd[1473]: time="2026-03-02T12:58:45.893967167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c5z2f,Uid:e7b96014-dd4c-414b-b70e-8a1ca75c64f5,Namespace:kube-system,Attempt:0,}"
Mar 2 12:58:45.923717 containerd[1473]: time="2026-03-02T12:58:45.922281450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-xtwxq,Uid:19f47fe4-3bd1-4b9e-b267-aaa1815cf68d,Namespace:tigera-operator,Attempt:0,}"
Mar 2 12:58:45.970766 containerd[1473]: time="2026-03-02T12:58:45.970194747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:58:45.970766 containerd[1473]: time="2026-03-02T12:58:45.970480861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:58:45.970766 containerd[1473]: time="2026-03-02T12:58:45.970537366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:46.203762 containerd[1473]: time="2026-03-02T12:58:45.972059388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:46.219246 containerd[1473]: time="2026-03-02T12:58:46.218850727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:58:46.219246 containerd[1473]: time="2026-03-02T12:58:46.218964160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:58:46.219246 containerd[1473]: time="2026-03-02T12:58:46.218980259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:46.219246 containerd[1473]: time="2026-03-02T12:58:46.219109531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:58:46.257341 systemd[1]: Started cri-containerd-44aec325cd2a9b3204d8efe1e626ebd021ea09661815ec08d3ee6dd2f1c20ae7.scope - libcontainer container 44aec325cd2a9b3204d8efe1e626ebd021ea09661815ec08d3ee6dd2f1c20ae7.
Mar 2 12:58:46.298431 systemd[1]: Started cri-containerd-22d16d37b1b96e69dc45a59484a0a66071e4adf595a714b56953b0b36d0c4146.scope - libcontainer container 22d16d37b1b96e69dc45a59484a0a66071e4adf595a714b56953b0b36d0c4146.
Mar 2 12:58:46.496664 containerd[1473]: time="2026-03-02T12:58:46.495742896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c5z2f,Uid:e7b96014-dd4c-414b-b70e-8a1ca75c64f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"44aec325cd2a9b3204d8efe1e626ebd021ea09661815ec08d3ee6dd2f1c20ae7\""
Mar 2 12:58:46.497392 kubelet[2554]: E0302 12:58:46.497363 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:46.516098 containerd[1473]: time="2026-03-02T12:58:46.515895260Z" level=info msg="CreateContainer within sandbox \"44aec325cd2a9b3204d8efe1e626ebd021ea09661815ec08d3ee6dd2f1c20ae7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 12:58:46.703715 containerd[1473]: time="2026-03-02T12:58:46.703541653Z" level=info msg="CreateContainer within sandbox \"44aec325cd2a9b3204d8efe1e626ebd021ea09661815ec08d3ee6dd2f1c20ae7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe03c8444c547766109eab7869481454e50cee6e31c2b3295923fff4ef1d1c36\""
Mar 2 12:58:46.705199 containerd[1473]: time="2026-03-02T12:58:46.704976731Z" level=info msg="StartContainer for \"fe03c8444c547766109eab7869481454e50cee6e31c2b3295923fff4ef1d1c36\""
Mar 2 12:58:46.781364 systemd[1]: Started cri-containerd-fe03c8444c547766109eab7869481454e50cee6e31c2b3295923fff4ef1d1c36.scope - libcontainer container fe03c8444c547766109eab7869481454e50cee6e31c2b3295923fff4ef1d1c36.
Mar 2 12:58:46.787942 containerd[1473]: time="2026-03-02T12:58:46.787721950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-xtwxq,Uid:19f47fe4-3bd1-4b9e-b267-aaa1815cf68d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"22d16d37b1b96e69dc45a59484a0a66071e4adf595a714b56953b0b36d0c4146\""
Mar 2 12:58:46.792610 containerd[1473]: time="2026-03-02T12:58:46.792510378Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\""
Mar 2 12:58:46.951428 kubelet[2554]: E0302 12:58:46.951342 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:46.996900 containerd[1473]: time="2026-03-02T12:58:46.996707105Z" level=info msg="StartContainer for \"fe03c8444c547766109eab7869481454e50cee6e31c2b3295923fff4ef1d1c36\" returns successfully"
Mar 2 12:58:47.955239 kubelet[2554]: E0302 12:58:47.955146 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:48.592125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886870354.mount: Deactivated successfully.
Mar 2 12:58:48.957526 kubelet[2554]: E0302 12:58:48.957105 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:50.977473 containerd[1473]: time="2026-03-02T12:58:50.977297457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:58:50.978560 containerd[1473]: time="2026-03-02T12:58:50.978451805Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719"
Mar 2 12:58:50.979866 containerd[1473]: time="2026-03-02T12:58:50.979775096Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:58:50.982287 containerd[1473]: time="2026-03-02T12:58:50.982236060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:58:50.983462 containerd[1473]: time="2026-03-02T12:58:50.983424020Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 4.190862467s"
Mar 2 12:58:50.983462 containerd[1473]: time="2026-03-02T12:58:50.983455319Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\""
Mar 2 12:58:50.988673 containerd[1473]: time="2026-03-02T12:58:50.988568712Z" level=info msg="CreateContainer within sandbox \"22d16d37b1b96e69dc45a59484a0a66071e4adf595a714b56953b0b36d0c4146\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 2 12:58:51.005150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628151037.mount: Deactivated successfully.
Mar 2 12:58:51.006833 containerd[1473]: time="2026-03-02T12:58:51.006758412Z" level=info msg="CreateContainer within sandbox \"22d16d37b1b96e69dc45a59484a0a66071e4adf595a714b56953b0b36d0c4146\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0673a1031e789e4927207a29cf2b11ca3b85e542f5580b8b2a97f461b2d7ec7\""
Mar 2 12:58:51.007428 containerd[1473]: time="2026-03-02T12:58:51.007386340Z" level=info msg="StartContainer for \"a0673a1031e789e4927207a29cf2b11ca3b85e542f5580b8b2a97f461b2d7ec7\""
Mar 2 12:58:51.063271 systemd[1]: Started cri-containerd-a0673a1031e789e4927207a29cf2b11ca3b85e542f5580b8b2a97f461b2d7ec7.scope - libcontainer container a0673a1031e789e4927207a29cf2b11ca3b85e542f5580b8b2a97f461b2d7ec7.
Mar 2 12:58:51.144833 containerd[1473]: time="2026-03-02T12:58:51.144737621Z" level=info msg="StartContainer for \"a0673a1031e789e4927207a29cf2b11ca3b85e542f5580b8b2a97f461b2d7ec7\" returns successfully"
Mar 2 12:58:51.975726 kubelet[2554]: I0302 12:58:51.975486 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c5z2f" podStartSLOduration=9.975467587 podStartE2EDuration="9.975467587s" podCreationTimestamp="2026-03-02 12:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:58:47.974450016 +0000 UTC m=+10.456132605" watchObservedRunningTime="2026-03-02 12:58:51.975467587 +0000 UTC m=+14.457150177"
Mar 2 12:58:57.988070 sudo[1644]: pam_unix(sudo:session): session closed for user root
Mar 2 12:58:57.997598 sshd[1641]: pam_unix(sshd:session): session closed for user core
Mar 2 12:58:58.008529 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:38234.service: Deactivated successfully.
Mar 2 12:58:58.016631 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 12:58:58.018220 systemd[1]: session-7.scope: Consumed 9.503s CPU time, 162.8M memory peak, 0B memory swap peak.
Mar 2 12:58:58.021834 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Mar 2 12:58:58.024616 systemd-logind[1456]: Removed session 7.
Mar 2 12:59:00.565727 kubelet[2554]: I0302 12:59:00.565647 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d4578d8d-xtwxq" podStartSLOduration=14.372934915 podStartE2EDuration="18.565627738s" podCreationTimestamp="2026-03-02 12:58:42 +0000 UTC" firstStartedPulling="2026-03-02 12:58:46.791758144 +0000 UTC m=+9.273440734" lastFinishedPulling="2026-03-02 12:58:50.984450967 +0000 UTC m=+13.466133557" observedRunningTime="2026-03-02 12:58:51.97561924 +0000 UTC m=+14.457301840" watchObservedRunningTime="2026-03-02 12:59:00.565627738 +0000 UTC m=+23.047310348"
Mar 2 12:59:00.629634 systemd[1]: Created slice kubepods-besteffort-podc8cb2d7d_1438_4ba2_8054_7fc452a1f3a7.slice - libcontainer container kubepods-besteffort-podc8cb2d7d_1438_4ba2_8054_7fc452a1f3a7.slice.
Mar 2 12:59:00.648339 systemd[1]: Created slice kubepods-besteffort-pod434a676c_bbf4_473f_bd48_005840362892.slice - libcontainer container kubepods-besteffort-pod434a676c_bbf4_473f_bd48_005840362892.slice.
Mar 2 12:59:00.719525 kubelet[2554]: I0302 12:59:00.719387 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-var-run-calico\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719525 kubelet[2554]: I0302 12:59:00.719451 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/434a676c-bbf4-473f-bd48-005840362892-node-certs\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719525 kubelet[2554]: I0302 12:59:00.719473 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-cni-log-dir\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719525 kubelet[2554]: I0302 12:59:00.719490 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-cni-net-dir\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719525 kubelet[2554]: I0302 12:59:00.719539 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-flexvol-driver-host\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719987 kubelet[2554]: I0302 12:59:00.719558 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-xtables-lock\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719987 kubelet[2554]: I0302 12:59:00.719605 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7-typha-certs\") pod \"calico-typha-568fccf8cb-2rwt6\" (UID: \"c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7\") " pod="calico-system/calico-typha-568fccf8cb-2rwt6"
Mar 2 12:59:00.719987 kubelet[2554]: I0302 12:59:00.719680 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-var-lib-calico\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719987 kubelet[2554]: I0302 12:59:00.719712 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8jz\" (UniqueName: \"kubernetes.io/projected/434a676c-bbf4-473f-bd48-005840362892-kube-api-access-gn8jz\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.719987 kubelet[2554]: I0302 12:59:00.719793 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-nodeproc\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.720329 kubelet[2554]: I0302 12:59:00.719865 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-sys-fs\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.720329 kubelet[2554]: I0302 12:59:00.719890 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnm42\" (UniqueName: \"kubernetes.io/projected/c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7-kube-api-access-fnm42\") pod \"calico-typha-568fccf8cb-2rwt6\" (UID: \"c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7\") " pod="calico-system/calico-typha-568fccf8cb-2rwt6"
Mar 2 12:59:00.720329 kubelet[2554]: I0302 12:59:00.719905 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-policysync\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.720329 kubelet[2554]: I0302 12:59:00.719925 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7-tigera-ca-bundle\") pod \"calico-typha-568fccf8cb-2rwt6\" (UID: \"c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7\") " pod="calico-system/calico-typha-568fccf8cb-2rwt6"
Mar 2 12:59:00.720329 kubelet[2554]: I0302 12:59:00.719940 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-lib-modules\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.720625 kubelet[2554]: I0302 12:59:00.719989 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-bpffs\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.720625 kubelet[2554]: I0302 12:59:00.720189 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/434a676c-bbf4-473f-bd48-005840362892-cni-bin-dir\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.720625 kubelet[2554]: I0302 12:59:00.720232 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434a676c-bbf4-473f-bd48-005840362892-tigera-ca-bundle\") pod \"calico-node-gtq5m\" (UID: \"434a676c-bbf4-473f-bd48-005840362892\") " pod="calico-system/calico-node-gtq5m"
Mar 2 12:59:00.810735 kubelet[2554]: E0302 12:59:00.810647 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:00.823897 kubelet[2554]: E0302 12:59:00.823715 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.823897 kubelet[2554]: W0302 12:59:00.823763 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.824146 kubelet[2554]: E0302 12:59:00.823837 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.824413 kubelet[2554]: E0302 12:59:00.824354 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.824413 kubelet[2554]: W0302 12:59:00.824391 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.824413 kubelet[2554]: E0302 12:59:00.824406 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.825132 kubelet[2554]: E0302 12:59:00.824763 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.825132 kubelet[2554]: W0302 12:59:00.824781 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.825132 kubelet[2554]: E0302 12:59:00.824795 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.825450 kubelet[2554]: E0302 12:59:00.825428 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.825526 kubelet[2554]: W0302 12:59:00.825512 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.825615 kubelet[2554]: E0302 12:59:00.825599 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.840066 kubelet[2554]: E0302 12:59:00.836953 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.840228 kubelet[2554]: W0302 12:59:00.840207 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.840303 kubelet[2554]: E0302 12:59:00.840290 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.845299 kubelet[2554]: E0302 12:59:00.845241 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.845299 kubelet[2554]: W0302 12:59:00.845292 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.845467 kubelet[2554]: E0302 12:59:00.845314 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.851766 kubelet[2554]: E0302 12:59:00.851712 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.851766 kubelet[2554]: W0302 12:59:00.851746 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.851766 kubelet[2554]: E0302 12:59:00.851761 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.853070 kubelet[2554]: E0302 12:59:00.852967 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.853152 kubelet[2554]: W0302 12:59:00.853115 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.853228 kubelet[2554]: E0302 12:59:00.853201 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.853472 kubelet[2554]: E0302 12:59:00.853435 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.853472 kubelet[2554]: W0302 12:59:00.853471 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.853472 kubelet[2554]: E0302 12:59:00.853484 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.853715 kubelet[2554]: E0302 12:59:00.853703 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.853715 kubelet[2554]: W0302 12:59:00.853714 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.853943 kubelet[2554]: E0302 12:59:00.853722 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.854139 kubelet[2554]: E0302 12:59:00.854089 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.854139 kubelet[2554]: W0302 12:59:00.854125 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.854232 kubelet[2554]: E0302 12:59:00.854136 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.854392 kubelet[2554]: E0302 12:59:00.854361 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.854392 kubelet[2554]: W0302 12:59:00.854380 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.854599 kubelet[2554]: E0302 12:59:00.854393 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.855763 kubelet[2554]: E0302 12:59:00.855636 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.855763 kubelet[2554]: W0302 12:59:00.855681 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.855763 kubelet[2554]: E0302 12:59:00.855697 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.857381 kubelet[2554]: E0302 12:59:00.857298 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.857381 kubelet[2554]: W0302 12:59:00.857330 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.857381 kubelet[2554]: E0302 12:59:00.857343 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:00.858298 kubelet[2554]: E0302 12:59:00.858261 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:00.858298 kubelet[2554]: W0302 12:59:00.858288 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:00.858298 kubelet[2554]: E0302 12:59:00.858299 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 2 12:59:00.858606 kubelet[2554]: E0302 12:59:00.858563 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.858606 kubelet[2554]: W0302 12:59:00.858589 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.858606 kubelet[2554]: E0302 12:59:00.858600 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.859091 kubelet[2554]: E0302 12:59:00.858960 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.859091 kubelet[2554]: W0302 12:59:00.858990 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.859091 kubelet[2554]: E0302 12:59:00.859045 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.859365 kubelet[2554]: E0302 12:59:00.859328 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.859365 kubelet[2554]: W0302 12:59:00.859341 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.859365 kubelet[2554]: E0302 12:59:00.859350 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.860083 kubelet[2554]: E0302 12:59:00.859662 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.860083 kubelet[2554]: W0302 12:59:00.859692 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.860083 kubelet[2554]: E0302 12:59:00.859703 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.860193 kubelet[2554]: E0302 12:59:00.860116 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.860193 kubelet[2554]: W0302 12:59:00.860127 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.860193 kubelet[2554]: E0302 12:59:00.860136 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.860457 kubelet[2554]: E0302 12:59:00.860436 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.860457 kubelet[2554]: W0302 12:59:00.860448 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.860457 kubelet[2554]: E0302 12:59:00.860457 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.860746 kubelet[2554]: E0302 12:59:00.860719 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.860746 kubelet[2554]: W0302 12:59:00.860742 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.860854 kubelet[2554]: E0302 12:59:00.860751 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.861660 kubelet[2554]: E0302 12:59:00.861626 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.861660 kubelet[2554]: W0302 12:59:00.861639 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.861660 kubelet[2554]: E0302 12:59:00.861649 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.864080 kubelet[2554]: E0302 12:59:00.863960 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.864080 kubelet[2554]: W0302 12:59:00.864076 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.864195 kubelet[2554]: E0302 12:59:00.864091 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.865457 kubelet[2554]: E0302 12:59:00.865414 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.865457 kubelet[2554]: W0302 12:59:00.865443 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.865457 kubelet[2554]: E0302 12:59:00.865454 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.867297 kubelet[2554]: E0302 12:59:00.867245 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.867297 kubelet[2554]: W0302 12:59:00.867286 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.867400 kubelet[2554]: E0302 12:59:00.867301 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.868677 kubelet[2554]: E0302 12:59:00.868477 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.868677 kubelet[2554]: W0302 12:59:00.868489 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.868677 kubelet[2554]: E0302 12:59:00.868499 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.870203 kubelet[2554]: E0302 12:59:00.870120 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.870203 kubelet[2554]: W0302 12:59:00.870151 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.870203 kubelet[2554]: E0302 12:59:00.870163 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.922847 kubelet[2554]: E0302 12:59:00.922722 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.922847 kubelet[2554]: W0302 12:59:00.922774 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.922847 kubelet[2554]: E0302 12:59:00.922840 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.923251 kubelet[2554]: I0302 12:59:00.922954 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ac16a752-3fe7-46bc-9e8d-01b34213f083-varrun\") pod \"csi-node-driver-w67n9\" (UID: \"ac16a752-3fe7-46bc-9e8d-01b34213f083\") " pod="calico-system/csi-node-driver-w67n9" Mar 2 12:59:00.923512 kubelet[2554]: E0302 12:59:00.923416 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.923512 kubelet[2554]: W0302 12:59:00.923455 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.923512 kubelet[2554]: E0302 12:59:00.923478 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.923960 kubelet[2554]: E0302 12:59:00.923869 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.923960 kubelet[2554]: W0302 12:59:00.923901 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.923960 kubelet[2554]: E0302 12:59:00.923918 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.924469 kubelet[2554]: E0302 12:59:00.924407 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.924469 kubelet[2554]: W0302 12:59:00.924443 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.924469 kubelet[2554]: E0302 12:59:00.924461 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.924742 kubelet[2554]: I0302 12:59:00.924513 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac16a752-3fe7-46bc-9e8d-01b34213f083-registration-dir\") pod \"csi-node-driver-w67n9\" (UID: \"ac16a752-3fe7-46bc-9e8d-01b34213f083\") " pod="calico-system/csi-node-driver-w67n9" Mar 2 12:59:00.925298 kubelet[2554]: E0302 12:59:00.924956 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.925298 kubelet[2554]: W0302 12:59:00.924974 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.925298 kubelet[2554]: E0302 12:59:00.925146 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.925298 kubelet[2554]: I0302 12:59:00.925187 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac16a752-3fe7-46bc-9e8d-01b34213f083-socket-dir\") pod \"csi-node-driver-w67n9\" (UID: \"ac16a752-3fe7-46bc-9e8d-01b34213f083\") " pod="calico-system/csi-node-driver-w67n9" Mar 2 12:59:00.925872 kubelet[2554]: E0302 12:59:00.925646 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.925872 kubelet[2554]: W0302 12:59:00.925662 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.925872 kubelet[2554]: E0302 12:59:00.925675 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.926346 kubelet[2554]: I0302 12:59:00.926097 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac16a752-3fe7-46bc-9e8d-01b34213f083-kubelet-dir\") pod \"csi-node-driver-w67n9\" (UID: \"ac16a752-3fe7-46bc-9e8d-01b34213f083\") " pod="calico-system/csi-node-driver-w67n9" Mar 2 12:59:00.926346 kubelet[2554]: E0302 12:59:00.926309 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.926346 kubelet[2554]: W0302 12:59:00.926324 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.926346 kubelet[2554]: E0302 12:59:00.926338 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.927183 kubelet[2554]: E0302 12:59:00.926743 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.927183 kubelet[2554]: W0302 12:59:00.926755 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.927183 kubelet[2554]: E0302 12:59:00.926769 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.927410 kubelet[2554]: E0302 12:59:00.927391 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.927453 kubelet[2554]: W0302 12:59:00.927409 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.927453 kubelet[2554]: E0302 12:59:00.927425 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.928637 kubelet[2554]: E0302 12:59:00.928472 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.928637 kubelet[2554]: W0302 12:59:00.928492 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.928637 kubelet[2554]: E0302 12:59:00.928508 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.928772 kubelet[2554]: I0302 12:59:00.928687 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfp5\" (UniqueName: \"kubernetes.io/projected/ac16a752-3fe7-46bc-9e8d-01b34213f083-kube-api-access-svfp5\") pod \"csi-node-driver-w67n9\" (UID: \"ac16a752-3fe7-46bc-9e8d-01b34213f083\") " pod="calico-system/csi-node-driver-w67n9" Mar 2 12:59:00.929160 kubelet[2554]: E0302 12:59:00.929126 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.929160 kubelet[2554]: W0302 12:59:00.929157 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.929160 kubelet[2554]: E0302 12:59:00.929171 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.929633 kubelet[2554]: E0302 12:59:00.929592 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.929694 kubelet[2554]: W0302 12:59:00.929677 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.929785 kubelet[2554]: E0302 12:59:00.929696 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.930345 kubelet[2554]: E0302 12:59:00.930309 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.930345 kubelet[2554]: W0302 12:59:00.930342 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.930443 kubelet[2554]: E0302 12:59:00.930358 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.930863 kubelet[2554]: E0302 12:59:00.930777 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.930863 kubelet[2554]: W0302 12:59:00.930838 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.930863 kubelet[2554]: E0302 12:59:00.930854 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:00.932925 kubelet[2554]: E0302 12:59:00.931351 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:00.932925 kubelet[2554]: W0302 12:59:00.931365 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:00.932925 kubelet[2554]: E0302 12:59:00.931375 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:00.934748 kubelet[2554]: E0302 12:59:00.934596 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:00.936057 containerd[1473]: time="2026-03-02T12:59:00.935621251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-568fccf8cb-2rwt6,Uid:c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7,Namespace:calico-system,Attempt:0,}" Mar 2 12:59:00.958333 containerd[1473]: time="2026-03-02T12:59:00.956843339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gtq5m,Uid:434a676c-bbf4-473f-bd48-005840362892,Namespace:calico-system,Attempt:0,}" Mar 2 12:59:01.013239 containerd[1473]: time="2026-03-02T12:59:01.012983903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:01.013239 containerd[1473]: time="2026-03-02T12:59:01.013215595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:01.013440 containerd[1473]: time="2026-03-02T12:59:01.013292689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:01.013711 containerd[1473]: time="2026-03-02T12:59:01.013506769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:01.030533 containerd[1473]: time="2026-03-02T12:59:01.030263001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:01.030533 containerd[1473]: time="2026-03-02T12:59:01.030347999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:01.030533 containerd[1473]: time="2026-03-02T12:59:01.030361414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:01.030533 containerd[1473]: time="2026-03-02T12:59:01.030455961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:01.034047 kubelet[2554]: E0302 12:59:01.033262 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:01.034047 kubelet[2554]: W0302 12:59:01.033291 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:01.034047 kubelet[2554]: E0302 12:59:01.033321 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:01.037222 kubelet[2554]: E0302 12:59:01.037165 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:01.037222 kubelet[2554]: W0302 12:59:01.037211 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:01.037339 kubelet[2554]: E0302 12:59:01.037241 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:01.038585 kubelet[2554]: E0302 12:59:01.038561 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:01.038699 kubelet[2554]: W0302 12:59:01.038681 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:01.038786 kubelet[2554]: E0302 12:59:01.038768 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:59:01.039427 kubelet[2554]: E0302 12:59:01.039407 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:01.040146 kubelet[2554]: W0302 12:59:01.040126 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:01.040312 kubelet[2554]: E0302 12:59:01.040242 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:01.040933 kubelet[2554]: E0302 12:59:01.040916 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:59:01.041391 kubelet[2554]: W0302 12:59:01.041125 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:59:01.041391 kubelet[2554]: E0302 12:59:01.041149 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:59:01.041554 systemd[1]: Started cri-containerd-dbd6f7c6c8c005e773843fb875420f868e9aadff64e45263cb4229b9bfd85d4e.scope - libcontainer container dbd6f7c6c8c005e773843fb875420f868e9aadff64e45263cb4229b9bfd85d4e. 
Mar 2 12:59:01.044559 kubelet[2554]: E0302 12:59:01.044320 2554 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:59:01.044559 kubelet[2554]: W0302 12:59:01.044334 2554 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:59:01.044559 kubelet[2554]: E0302 12:59:01.044348 2554 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:59:01.063275 systemd[1]: Started cri-containerd-f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c.scope - libcontainer container f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c.
Mar 2 12:59:01.099363 containerd[1473]: time="2026-03-02T12:59:01.099264761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gtq5m,Uid:434a676c-bbf4-473f-bd48-005840362892,Namespace:calico-system,Attempt:0,} returns sandbox id \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\""
Mar 2 12:59:01.103095 containerd[1473]: time="2026-03-02T12:59:01.102285471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\""
Mar 2 12:59:01.135782 containerd[1473]: time="2026-03-02T12:59:01.135748144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-568fccf8cb-2rwt6,Uid:c8cb2d7d-1438-4ba2-8054-7fc452a1f3a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbd6f7c6c8c005e773843fb875420f868e9aadff64e45263cb4229b9bfd85d4e\""
Mar 2 12:59:01.136616 kubelet[2554]: E0302 12:59:01.136541 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:01.860095 containerd[1473]: time="2026-03-02T12:59:01.859927613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:01.861049 containerd[1473]: time="2026-03-02T12:59:01.860889123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=6186335"
Mar 2 12:59:01.862151 containerd[1473]: time="2026-03-02T12:59:01.862073970Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:01.865332 containerd[1473]: time="2026-03-02T12:59:01.865258179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:01.866371 containerd[1473]: time="2026-03-02T12:59:01.866248316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 763.928972ms"
Mar 2 12:59:01.866371 containerd[1473]: time="2026-03-02T12:59:01.866302327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\""
Mar 2 12:59:01.869183 containerd[1473]: time="2026-03-02T12:59:01.867510383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\""
Mar 2 12:59:01.871445 containerd[1473]: time="2026-03-02T12:59:01.871407716Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 2 12:59:01.892279 containerd[1473]: time="2026-03-02T12:59:01.892192347Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2\""
Mar 2 12:59:01.894131 containerd[1473]: time="2026-03-02T12:59:01.894102936Z" level=info msg="StartContainer for \"5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2\""
Mar 2 12:59:01.950201 systemd[1]: Started cri-containerd-5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2.scope - libcontainer container 5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2.
Mar 2 12:59:01.996769 containerd[1473]: time="2026-03-02T12:59:01.996706053Z" level=info msg="StartContainer for \"5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2\" returns successfully"
Mar 2 12:59:02.016765 systemd[1]: cri-containerd-5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2.scope: Deactivated successfully.
Mar 2 12:59:02.131868 containerd[1473]: time="2026-03-02T12:59:02.131360297Z" level=info msg="shim disconnected" id=5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2 namespace=k8s.io
Mar 2 12:59:02.131868 containerd[1473]: time="2026-03-02T12:59:02.131627947Z" level=warning msg="cleaning up after shim disconnected" id=5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2 namespace=k8s.io
Mar 2 12:59:02.131868 containerd[1473]: time="2026-03-02T12:59:02.131655799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 12:59:02.741786 kubelet[2554]: E0302 12:59:02.741685 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:02.836367 systemd[1]: run-containerd-runc-k8s.io-5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2-runc.8y9mhU.mount: Deactivated successfully.
Mar 2 12:59:02.836509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e27d9505dfac4fb13d4fdec50f67fa212b03fd7e0c4339e11f758293f97dbe2-rootfs.mount: Deactivated successfully.
Mar 2 12:59:03.537124 containerd[1473]: time="2026-03-02T12:59:03.536942515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:03.538175 containerd[1473]: time="2026-03-02T12:59:03.538111705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=34538513"
Mar 2 12:59:03.539774 containerd[1473]: time="2026-03-02T12:59:03.539721600Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:03.543448 containerd[1473]: time="2026-03-02T12:59:03.543349203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:03.544199 containerd[1473]: time="2026-03-02T12:59:03.544150310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 1.676609038s"
Mar 2 12:59:03.544244 containerd[1473]: time="2026-03-02T12:59:03.544200243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\""
Mar 2 12:59:03.545740 containerd[1473]: time="2026-03-02T12:59:03.545634223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\""
Mar 2 12:59:03.560598 containerd[1473]: time="2026-03-02T12:59:03.560556857Z" level=info msg="CreateContainer within sandbox \"dbd6f7c6c8c005e773843fb875420f868e9aadff64e45263cb4229b9bfd85d4e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 2 12:59:03.576162 containerd[1473]: time="2026-03-02T12:59:03.576080199Z" level=info msg="CreateContainer within sandbox \"dbd6f7c6c8c005e773843fb875420f868e9aadff64e45263cb4229b9bfd85d4e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"94bbcb2d572ff9f20caf7ad0fc853701873afb341209cd09e3795abb971f4210\""
Mar 2 12:59:03.576955 containerd[1473]: time="2026-03-02T12:59:03.576929446Z" level=info msg="StartContainer for \"94bbcb2d572ff9f20caf7ad0fc853701873afb341209cd09e3795abb971f4210\""
Mar 2 12:59:03.629218 systemd[1]: Started cri-containerd-94bbcb2d572ff9f20caf7ad0fc853701873afb341209cd09e3795abb971f4210.scope - libcontainer container 94bbcb2d572ff9f20caf7ad0fc853701873afb341209cd09e3795abb971f4210.
Mar 2 12:59:03.688900 containerd[1473]: time="2026-03-02T12:59:03.688634590Z" level=info msg="StartContainer for \"94bbcb2d572ff9f20caf7ad0fc853701873afb341209cd09e3795abb971f4210\" returns successfully"
Mar 2 12:59:04.047119 kubelet[2554]: E0302 12:59:04.044285 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:04.072917 kubelet[2554]: I0302 12:59:04.072711 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-568fccf8cb-2rwt6" podStartSLOduration=1.664350454 podStartE2EDuration="4.072665774s" podCreationTimestamp="2026-03-02 12:59:00 +0000 UTC" firstStartedPulling="2026-03-02 12:59:01.13722258 +0000 UTC m=+23.618905170" lastFinishedPulling="2026-03-02 12:59:03.5455379 +0000 UTC m=+26.027220490" observedRunningTime="2026-03-02 12:59:04.071954837 +0000 UTC m=+26.553637466" watchObservedRunningTime="2026-03-02 12:59:04.072665774 +0000 UTC m=+26.554348364"
Mar 2 12:59:04.765281 kubelet[2554]: E0302 12:59:04.764858 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:05.058122 kubelet[2554]: I0302 12:59:05.056187 2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 2 12:59:05.058122 kubelet[2554]: E0302 12:59:05.057643 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:06.742512 kubelet[2554]: E0302 12:59:06.742252 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:08.742585 kubelet[2554]: E0302 12:59:08.742473 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:09.992296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount551018587.mount: Deactivated successfully.
Mar 2 12:59:10.302806 containerd[1473]: time="2026-03-02T12:59:10.302632754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:10.304055 containerd[1473]: time="2026-03-02T12:59:10.303903432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365"
Mar 2 12:59:10.305289 containerd[1473]: time="2026-03-02T12:59:10.305231204Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:10.308108 containerd[1473]: time="2026-03-02T12:59:10.307992027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:10.309518 containerd[1473]: time="2026-03-02T12:59:10.309413743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 6.763732041s"
Mar 2 12:59:10.309518 containerd[1473]: time="2026-03-02T12:59:10.309453997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\""
Mar 2 12:59:10.322612 containerd[1473]: time="2026-03-02T12:59:10.321801168Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 2 12:59:10.345792 containerd[1473]: time="2026-03-02T12:59:10.345740627Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411\""
Mar 2 12:59:10.347117 containerd[1473]: time="2026-03-02T12:59:10.346940425Z" level=info msg="StartContainer for \"328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411\""
Mar 2 12:59:10.436276 systemd[1]: Started cri-containerd-328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411.scope - libcontainer container 328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411.
Mar 2 12:59:10.488444 containerd[1473]: time="2026-03-02T12:59:10.488349231Z" level=info msg="StartContainer for \"328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411\" returns successfully"
Mar 2 12:59:10.583514 systemd[1]: cri-containerd-328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411.scope: Deactivated successfully.
Mar 2 12:59:10.743415 kubelet[2554]: E0302 12:59:10.742763 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:10.765043 containerd[1473]: time="2026-03-02T12:59:10.764880296Z" level=info msg="shim disconnected" id=328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411 namespace=k8s.io
Mar 2 12:59:10.765043 containerd[1473]: time="2026-03-02T12:59:10.764960816Z" level=warning msg="cleaning up after shim disconnected" id=328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411 namespace=k8s.io
Mar 2 12:59:10.765242 containerd[1473]: time="2026-03-02T12:59:10.765049912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 12:59:10.992930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-328245292c71a06a885eebf215b230d71e7025dae063d46bfac3880ab8512411-rootfs.mount: Deactivated successfully.
Mar 2 12:59:11.137619 containerd[1473]: time="2026-03-02T12:59:11.137230046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\""
Mar 2 12:59:12.741627 kubelet[2554]: E0302 12:59:12.741440 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:14.744601 kubelet[2554]: E0302 12:59:14.742340 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:16.360860 containerd[1473]: time="2026-03-02T12:59:16.360570730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:16.362783 containerd[1473]: time="2026-03-02T12:59:16.362211313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418"
Mar 2 12:59:16.365210 containerd[1473]: time="2026-03-02T12:59:16.365141377Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:16.368901 containerd[1473]: time="2026-03-02T12:59:16.368814801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:16.370680 containerd[1473]: time="2026-03-02T12:59:16.370584581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 5.23328265s"
Mar 2 12:59:16.370680 containerd[1473]: time="2026-03-02T12:59:16.370665812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\""
Mar 2 12:59:16.388145 containerd[1473]: time="2026-03-02T12:59:16.387950893Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 2 12:59:16.534215 containerd[1473]: time="2026-03-02T12:59:16.533898214Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a\""
Mar 2 12:59:16.543672 containerd[1473]: time="2026-03-02T12:59:16.543545572Z" level=info msg="StartContainer for \"6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a\""
Mar 2 12:59:16.633726 systemd[1]: Started cri-containerd-6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a.scope - libcontainer container 6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a.
Mar 2 12:59:16.687686 containerd[1473]: time="2026-03-02T12:59:16.687534689Z" level=info msg="StartContainer for \"6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a\" returns successfully"
Mar 2 12:59:16.742631 kubelet[2554]: E0302 12:59:16.742384 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w67n9" podUID="ac16a752-3fe7-46bc-9e8d-01b34213f083"
Mar 2 12:59:17.620923 systemd[1]: cri-containerd-6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a.scope: Deactivated successfully.
Mar 2 12:59:17.621723 systemd[1]: cri-containerd-6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a.scope: Consumed 1.153s CPU time.
Mar 2 12:59:17.662495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a-rootfs.mount: Deactivated successfully.
Mar 2 12:59:17.668560 containerd[1473]: time="2026-03-02T12:59:17.668444954Z" level=info msg="shim disconnected" id=6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a namespace=k8s.io
Mar 2 12:59:17.668560 containerd[1473]: time="2026-03-02T12:59:17.668545141Z" level=warning msg="cleaning up after shim disconnected" id=6c2153b2c1a78c89fc55ec073c224082f487fca1f7fd14e486be79f2cf11f87a namespace=k8s.io
Mar 2 12:59:17.668560 containerd[1473]: time="2026-03-02T12:59:17.668560509Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 12:59:17.696982 kubelet[2554]: I0302 12:59:17.696887 2554 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 2 12:59:17.782302 systemd[1]: Created slice kubepods-burstable-pod1615fc41_91d4_4d09_afc6_7512c37dc161.slice - libcontainer container kubepods-burstable-pod1615fc41_91d4_4d09_afc6_7512c37dc161.slice.
Mar 2 12:59:17.789651 kubelet[2554]: I0302 12:59:17.789592 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l97b\" (UniqueName: \"kubernetes.io/projected/292cf7f8-5770-4cfe-98b8-b56cbdd122c1-kube-api-access-4l97b\") pod \"calico-apiserver-5bc544cbd4-7q7cm\" (UID: \"292cf7f8-5770-4cfe-98b8-b56cbdd122c1\") " pod="calico-system/calico-apiserver-5bc544cbd4-7q7cm"
Mar 2 12:59:17.790373 kubelet[2554]: I0302 12:59:17.789677 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214b37e0-0ea7-495d-89ba-9790d04fdf36-config\") pod \"goldmane-9566f57b5-dclsc\" (UID: \"214b37e0-0ea7-495d-89ba-9790d04fdf36\") " pod="calico-system/goldmane-9566f57b5-dclsc"
Mar 2 12:59:17.790373 kubelet[2554]: I0302 12:59:17.789705 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/292cf7f8-5770-4cfe-98b8-b56cbdd122c1-calico-apiserver-certs\") pod \"calico-apiserver-5bc544cbd4-7q7cm\" (UID: \"292cf7f8-5770-4cfe-98b8-b56cbdd122c1\") " pod="calico-system/calico-apiserver-5bc544cbd4-7q7cm"
Mar 2 12:59:17.790373 kubelet[2554]: I0302 12:59:17.789726 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/214b37e0-0ea7-495d-89ba-9790d04fdf36-goldmane-key-pair\") pod \"goldmane-9566f57b5-dclsc\" (UID: \"214b37e0-0ea7-495d-89ba-9790d04fdf36\") " pod="calico-system/goldmane-9566f57b5-dclsc"
Mar 2 12:59:17.790373 kubelet[2554]: I0302 12:59:17.789777 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctxn\" (UniqueName: \"kubernetes.io/projected/1615fc41-91d4-4d09-afc6-7512c37dc161-kube-api-access-8ctxn\") pod \"coredns-674b8bbfcf-bpfqk\" (UID: \"1615fc41-91d4-4d09-afc6-7512c37dc161\") " pod="kube-system/coredns-674b8bbfcf-bpfqk"
Mar 2 12:59:17.790373 kubelet[2554]: I0302 12:59:17.789809 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86nl\" (UniqueName: \"kubernetes.io/projected/9d794842-cae6-42e3-92b8-3b3c0e54e550-kube-api-access-f86nl\") pod \"coredns-674b8bbfcf-w7cmz\" (UID: \"9d794842-cae6-42e3-92b8-3b3c0e54e550\") " pod="kube-system/coredns-674b8bbfcf-w7cmz"
Mar 2 12:59:17.790582 kubelet[2554]: I0302 12:59:17.789831 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d794842-cae6-42e3-92b8-3b3c0e54e550-config-volume\") pod \"coredns-674b8bbfcf-w7cmz\" (UID: \"9d794842-cae6-42e3-92b8-3b3c0e54e550\") " pod="kube-system/coredns-674b8bbfcf-w7cmz"
Mar 2 12:59:17.790582 kubelet[2554]: I0302 12:59:17.789851 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/214b37e0-0ea7-495d-89ba-9790d04fdf36-goldmane-ca-bundle\") pod \"goldmane-9566f57b5-dclsc\" (UID: \"214b37e0-0ea7-495d-89ba-9790d04fdf36\") " pod="calico-system/goldmane-9566f57b5-dclsc"
Mar 2 12:59:17.790582 kubelet[2554]: I0302 12:59:17.789875 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gtx\" (UniqueName: \"kubernetes.io/projected/214b37e0-0ea7-495d-89ba-9790d04fdf36-kube-api-access-d7gtx\") pod \"goldmane-9566f57b5-dclsc\" (UID: \"214b37e0-0ea7-495d-89ba-9790d04fdf36\") " pod="calico-system/goldmane-9566f57b5-dclsc"
Mar 2 12:59:17.790582 kubelet[2554]: I0302 12:59:17.789894 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1615fc41-91d4-4d09-afc6-7512c37dc161-config-volume\") pod \"coredns-674b8bbfcf-bpfqk\" (UID: \"1615fc41-91d4-4d09-afc6-7512c37dc161\") " pod="kube-system/coredns-674b8bbfcf-bpfqk"
Mar 2 12:59:17.793609 systemd[1]: Created slice kubepods-besteffort-pod292cf7f8_5770_4cfe_98b8_b56cbdd122c1.slice - libcontainer container kubepods-besteffort-pod292cf7f8_5770_4cfe_98b8_b56cbdd122c1.slice.
Mar 2 12:59:17.804620 systemd[1]: Created slice kubepods-burstable-pod9d794842_cae6_42e3_92b8_3b3c0e54e550.slice - libcontainer container kubepods-burstable-pod9d794842_cae6_42e3_92b8_3b3c0e54e550.slice.
Mar 2 12:59:17.817351 systemd[1]: Created slice kubepods-besteffort-pod214b37e0_0ea7_495d_89ba_9790d04fdf36.slice - libcontainer container kubepods-besteffort-pod214b37e0_0ea7_495d_89ba_9790d04fdf36.slice.
Mar 2 12:59:17.826103 systemd[1]: Created slice kubepods-besteffort-pod6ddf42e0_6cd1_4b95_8cfd_884ff77a512d.slice - libcontainer container kubepods-besteffort-pod6ddf42e0_6cd1_4b95_8cfd_884ff77a512d.slice.
Mar 2 12:59:17.835718 systemd[1]: Created slice kubepods-besteffort-pod6ed2a71a_3c1f_4929_9f89_b7aec80e5c6d.slice - libcontainer container kubepods-besteffort-pod6ed2a71a_3c1f_4929_9f89_b7aec80e5c6d.slice.
Mar 2 12:59:17.843922 systemd[1]: Created slice kubepods-besteffort-pod2f74b832_0faa_4b95_8876_eccbea5d41d7.slice - libcontainer container kubepods-besteffort-pod2f74b832_0faa_4b95_8876_eccbea5d41d7.slice.
Mar 2 12:59:17.890699 kubelet[2554]: I0302 12:59:17.890267 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-nginx-config\") pod \"whisker-557c4f875b-4mvrb\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " pod="calico-system/whisker-557c4f875b-4mvrb"
Mar 2 12:59:17.890699 kubelet[2554]: I0302 12:59:17.890325 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr9b8\" (UniqueName: \"kubernetes.io/projected/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-kube-api-access-vr9b8\") pod \"whisker-557c4f875b-4mvrb\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " pod="calico-system/whisker-557c4f875b-4mvrb"
Mar 2 12:59:17.890699 kubelet[2554]: I0302 12:59:17.890360 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ddf42e0-6cd1-4b95-8cfd-884ff77a512d-tigera-ca-bundle\") pod \"calico-kube-controllers-74c4f95764-z2fkz\" (UID: \"6ddf42e0-6cd1-4b95-8cfd-884ff77a512d\") " pod="calico-system/calico-kube-controllers-74c4f95764-z2fkz"
Mar 2 12:59:17.890699 kubelet[2554]: I0302 12:59:17.890377 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-ca-bundle\") pod \"whisker-557c4f875b-4mvrb\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " pod="calico-system/whisker-557c4f875b-4mvrb"
Mar 2 12:59:17.890699 kubelet[2554]: I0302 12:59:17.890393 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rhk\" (UniqueName: \"kubernetes.io/projected/2f74b832-0faa-4b95-8876-eccbea5d41d7-kube-api-access-s5rhk\") pod \"calico-apiserver-5bc544cbd4-nx2cs\" (UID: \"2f74b832-0faa-4b95-8876-eccbea5d41d7\") " pod="calico-system/calico-apiserver-5bc544cbd4-nx2cs"
Mar 2 12:59:17.891237 kubelet[2554]: I0302 12:59:17.890425 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-backend-key-pair\") pod \"whisker-557c4f875b-4mvrb\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " pod="calico-system/whisker-557c4f875b-4mvrb"
Mar 2 12:59:17.891237 kubelet[2554]: I0302 12:59:17.890454 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7njs\" (UniqueName: \"kubernetes.io/projected/6ddf42e0-6cd1-4b95-8cfd-884ff77a512d-kube-api-access-j7njs\") pod \"calico-kube-controllers-74c4f95764-z2fkz\" (UID: \"6ddf42e0-6cd1-4b95-8cfd-884ff77a512d\") " pod="calico-system/calico-kube-controllers-74c4f95764-z2fkz"
Mar 2 12:59:17.891237 kubelet[2554]: I0302 12:59:17.890471 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f74b832-0faa-4b95-8876-eccbea5d41d7-calico-apiserver-certs\") pod \"calico-apiserver-5bc544cbd4-nx2cs\" (UID: \"2f74b832-0faa-4b95-8876-eccbea5d41d7\") " pod="calico-system/calico-apiserver-5bc544cbd4-nx2cs"
Mar 2 12:59:18.087392 kubelet[2554]: E0302 12:59:18.087254 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:18.089232 containerd[1473]: time="2026-03-02T12:59:18.087957671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfqk,Uid:1615fc41-91d4-4d09-afc6-7512c37dc161,Namespace:kube-system,Attempt:0,}"
Mar 2 12:59:18.099049 containerd[1473]: time="2026-03-02T12:59:18.098930044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-7q7cm,Uid:292cf7f8-5770-4cfe-98b8-b56cbdd122c1,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:18.117400 kubelet[2554]: E0302 12:59:18.117312 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:18.118867 containerd[1473]: time="2026-03-02T12:59:18.118809022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w7cmz,Uid:9d794842-cae6-42e3-92b8-3b3c0e54e550,Namespace:kube-system,Attempt:0,}"
Mar 2 12:59:18.124235 containerd[1473]: time="2026-03-02T12:59:18.122558788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-dclsc,Uid:214b37e0-0ea7-495d-89ba-9790d04fdf36,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:18.134865 containerd[1473]: time="2026-03-02T12:59:18.134773741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c4f95764-z2fkz,Uid:6ddf42e0-6cd1-4b95-8cfd-884ff77a512d,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:18.138822 containerd[1473]: time="2026-03-02T12:59:18.138792481Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 2 12:59:18.144984 containerd[1473]: time="2026-03-02T12:59:18.142482988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557c4f875b-4mvrb,Uid:6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:18.148779 containerd[1473]: time="2026-03-02T12:59:18.148712007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-nx2cs,Uid:2f74b832-0faa-4b95-8876-eccbea5d41d7,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:18.257658 containerd[1473]: time="2026-03-02T12:59:18.257560538Z" level=info msg="CreateContainer within sandbox \"f752d140361a640d0675c87950edf7ddcec40ec96fe695f9f9fbb76816b56f1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"117e14d3954c0d7aabf8d1762544a501734e63404c890063b7334a6a9cc2e37b\""
Mar 2 12:59:18.262145 containerd[1473]: time="2026-03-02T12:59:18.261979137Z" level=info msg="StartContainer for \"117e14d3954c0d7aabf8d1762544a501734e63404c890063b7334a6a9cc2e37b\""
Mar 2 12:59:18.344285 systemd[1]: Started cri-containerd-117e14d3954c0d7aabf8d1762544a501734e63404c890063b7334a6a9cc2e37b.scope - libcontainer container 117e14d3954c0d7aabf8d1762544a501734e63404c890063b7334a6a9cc2e37b.
Mar 2 12:59:18.443858 containerd[1473]: time="2026-03-02T12:59:18.443658962Z" level=info msg="StartContainer for \"117e14d3954c0d7aabf8d1762544a501734e63404c890063b7334a6a9cc2e37b\" returns successfully"
Mar 2 12:59:18.470328 containerd[1473]: time="2026-03-02T12:59:18.470216566Z" level=error msg="Failed to destroy network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.472364 containerd[1473]: time="2026-03-02T12:59:18.472274576Z" level=error msg="Failed to destroy network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.474539 containerd[1473]: time="2026-03-02T12:59:18.474473789Z" level=error msg="encountered an error cleaning up failed sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.474684 containerd[1473]: time="2026-03-02T12:59:18.474582171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-nx2cs,Uid:2f74b832-0faa-4b95-8876-eccbea5d41d7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.478491 containerd[1473]: time="2026-03-02T12:59:18.478449129Z" level=error msg="encountered an error cleaning up failed sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.478647 containerd[1473]: time="2026-03-02T12:59:18.478620187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c4f95764-z2fkz,Uid:6ddf42e0-6cd1-4b95-8cfd-884ff77a512d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.487339 kubelet[2554]: E0302 12:59:18.487165 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.487339 kubelet[2554]: E0302 12:59:18.487276 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c4f95764-z2fkz"
Mar 2 12:59:18.487339 kubelet[2554]: E0302 12:59:18.487273 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.487514 kubelet[2554]: E0302 12:59:18.487352 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c4f95764-z2fkz"
Mar 2 12:59:18.487514 kubelet[2554]: E0302 12:59:18.487395 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bc544cbd4-nx2cs"
Mar 2 12:59:18.487514 kubelet[2554]: E0302 12:59:18.487438 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bc544cbd4-nx2cs"
Mar 2 12:59:18.487589 kubelet[2554]: E0302 12:59:18.487437 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c4f95764-z2fkz_calico-system(6ddf42e0-6cd1-4b95-8cfd-884ff77a512d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c4f95764-z2fkz_calico-system(6ddf42e0-6cd1-4b95-8cfd-884ff77a512d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c4f95764-z2fkz" podUID="6ddf42e0-6cd1-4b95-8cfd-884ff77a512d"
Mar 2 12:59:18.487676 kubelet[2554]: E0302 12:59:18.487516 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bc544cbd4-nx2cs_calico-system(2f74b832-0faa-4b95-8876-eccbea5d41d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bc544cbd4-nx2cs_calico-system(2f74b832-0faa-4b95-8876-eccbea5d41d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bc544cbd4-nx2cs" podUID="2f74b832-0faa-4b95-8876-eccbea5d41d7"
Mar 2 12:59:18.491694 containerd[1473]: time="2026-03-02T12:59:18.491596798Z" level=error msg="Failed to destroy network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.495355 containerd[1473]: time="2026-03-02T12:59:18.495307217Z" level=error msg="encountered an error cleaning up failed sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.495740 containerd[1473]: time="2026-03-02T12:59:18.495475960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfqk,Uid:1615fc41-91d4-4d09-afc6-7512c37dc161,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.496157 kubelet[2554]: E0302 12:59:18.496056 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.496234 kubelet[2554]: E0302 12:59:18.496171 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpfqk"
Mar 2 12:59:18.496234 kubelet[2554]: E0302 12:59:18.496206 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpfqk"
Mar 2 12:59:18.496311 kubelet[2554]: E0302 12:59:18.496252 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bpfqk_kube-system(1615fc41-91d4-4d09-afc6-7512c37dc161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bpfqk_kube-system(1615fc41-91d4-4d09-afc6-7512c37dc161)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bpfqk" podUID="1615fc41-91d4-4d09-afc6-7512c37dc161"
Mar 2 12:59:18.509516 containerd[1473]: time="2026-03-02T12:59:18.509451669Z" level=error msg="Failed to destroy network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.510342 containerd[1473]: time="2026-03-02T12:59:18.510267183Z" level=error msg="encountered an error cleaning up failed sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.510421 containerd[1473]: time="2026-03-02T12:59:18.510350297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-7q7cm,Uid:292cf7f8-5770-4cfe-98b8-b56cbdd122c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.510717 kubelet[2554]: E0302 12:59:18.510631 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.510717 kubelet[2554]: E0302 12:59:18.510699 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bc544cbd4-7q7cm"
Mar 2 12:59:18.510835 kubelet[2554]: E0302 12:59:18.510721 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bc544cbd4-7q7cm"
Mar 2 12:59:18.510835 kubelet[2554]: E0302 12:59:18.510762 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bc544cbd4-7q7cm_calico-system(292cf7f8-5770-4cfe-98b8-b56cbdd122c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bc544cbd4-7q7cm_calico-system(292cf7f8-5770-4cfe-98b8-b56cbdd122c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bc544cbd4-7q7cm" podUID="292cf7f8-5770-4cfe-98b8-b56cbdd122c1"
Mar 2 12:59:18.514520 containerd[1473]: time="2026-03-02T12:59:18.514422295Z" level=error msg="Failed to destroy network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.515943 containerd[1473]: time="2026-03-02T12:59:18.514805087Z" level=error msg="Failed to destroy network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.515943 containerd[1473]: time="2026-03-02T12:59:18.515266833Z" level=error msg="encountered an error cleaning up failed sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.515943 containerd[1473]: time="2026-03-02T12:59:18.515389570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-dclsc,Uid:214b37e0-0ea7-495d-89ba-9790d04fdf36,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.515943 containerd[1473]: time="2026-03-02T12:59:18.515733418Z" level=error msg="encountered an error cleaning up failed sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.515943 containerd[1473]: time="2026-03-02T12:59:18.515804852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557c4f875b-4mvrb,Uid:6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.517525 kubelet[2554]: E0302 12:59:18.515622 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.517525 kubelet[2554]: E0302 12:59:18.515670 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9566f57b5-dclsc"
Mar 2 12:59:18.517525 kubelet[2554]: E0302 12:59:18.515696 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9566f57b5-dclsc"
Mar 2 12:59:18.517609 kubelet[2554]: E0302 12:59:18.515757 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9566f57b5-dclsc_calico-system(214b37e0-0ea7-495d-89ba-9790d04fdf36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9566f57b5-dclsc_calico-system(214b37e0-0ea7-495d-89ba-9790d04fdf36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9566f57b5-dclsc" podUID="214b37e0-0ea7-495d-89ba-9790d04fdf36"
Mar 2 12:59:18.517677 kubelet[2554]: E0302 12:59:18.517637 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.517703 kubelet[2554]: E0302 12:59:18.517679 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-557c4f875b-4mvrb"
Mar 2 12:59:18.517724 kubelet[2554]: E0302 12:59:18.517706 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-557c4f875b-4mvrb"
Mar 2 12:59:18.517890 kubelet[2554]: E0302 12:59:18.517758 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-557c4f875b-4mvrb_calico-system(6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-557c4f875b-4mvrb_calico-system(6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-557c4f875b-4mvrb" podUID="6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d"
Mar 2 12:59:18.529408 containerd[1473]: time="2026-03-02T12:59:18.529334082Z" level=error msg="Failed to destroy network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.529910 containerd[1473]: time="2026-03-02T12:59:18.529833298Z" level=error msg="encountered an error cleaning up failed sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.530050 containerd[1473]: time="2026-03-02T12:59:18.529927412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w7cmz,Uid:9d794842-cae6-42e3-92b8-3b3c0e54e550,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.530506 kubelet[2554]: E0302 12:59:18.530346 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 2 12:59:18.530506 kubelet[2554]: E0302 12:59:18.530428 2554 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w7cmz"
Mar 2 12:59:18.530506 kubelet[2554]: E0302 12:59:18.530455 2554 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w7cmz"
Mar 2 12:59:18.530647 kubelet[2554]: E0302 12:59:18.530522 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w7cmz_kube-system(9d794842-cae6-42e3-92b8-3b3c0e54e550)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w7cmz_kube-system(9d794842-cae6-42e3-92b8-3b3c0e54e550)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w7cmz" podUID="9d794842-cae6-42e3-92b8-3b3c0e54e550"
Mar 2 12:59:18.750441 systemd[1]: Created slice kubepods-besteffort-podac16a752_3fe7_46bc_9e8d_01b34213f083.slice - libcontainer container kubepods-besteffort-podac16a752_3fe7_46bc_9e8d_01b34213f083.slice.
Mar 2 12:59:18.771514 containerd[1473]: time="2026-03-02T12:59:18.770842216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w67n9,Uid:ac16a752-3fe7-46bc-9e8d-01b34213f083,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:19.124913 kubelet[2554]: I0302 12:59:19.124810 2554 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea"
Mar 2 12:59:19.127360 kubelet[2554]: I0302 12:59:19.125940 2554 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2"
Mar 2 12:59:19.129207 kubelet[2554]: I0302 12:59:19.129157 2554 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf"
Mar 2 12:59:19.129465 containerd[1473]: time="2026-03-02T12:59:19.129401351Z" level=info msg="StopPodSandbox for \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\""
Mar 2 12:59:19.129860 containerd[1473]: time="2026-03-02T12:59:19.129688087Z" level=info msg="StopPodSandbox for \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\""
Mar 2 12:59:19.129860 containerd[1473]: time="2026-03-02T12:59:19.129783090Z" level=info msg="StopPodSandbox for \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\""
Mar 2
12:59:19.131515 containerd[1473]: time="2026-03-02T12:59:19.131454283Z" level=info msg="Ensure that sandbox 6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea in task-service has been cleanup successfully" Mar 2 12:59:19.131650 containerd[1473]: time="2026-03-02T12:59:19.131521377Z" level=info msg="Ensure that sandbox 3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf in task-service has been cleanup successfully" Mar 2 12:59:19.132201 containerd[1473]: time="2026-03-02T12:59:19.131884672Z" level=info msg="Ensure that sandbox 232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2 in task-service has been cleanup successfully" Mar 2 12:59:19.148644 kubelet[2554]: I0302 12:59:19.147349 2554 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:19.150949 containerd[1473]: time="2026-03-02T12:59:19.149650631Z" level=info msg="StopPodSandbox for \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\"" Mar 2 12:59:19.150949 containerd[1473]: time="2026-03-02T12:59:19.149978308Z" level=info msg="Ensure that sandbox 3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110 in task-service has been cleanup successfully" Mar 2 12:59:19.156854 kubelet[2554]: I0302 12:59:19.155513 2554 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:19.158661 containerd[1473]: time="2026-03-02T12:59:19.158581031Z" level=info msg="StopPodSandbox for \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\"" Mar 2 12:59:19.158889 containerd[1473]: time="2026-03-02T12:59:19.158826337Z" level=info msg="Ensure that sandbox 2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f in task-service has been cleanup successfully" Mar 2 12:59:19.166239 kubelet[2554]: I0302 12:59:19.166147 2554 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:19.168829 containerd[1473]: time="2026-03-02T12:59:19.167969133Z" level=info msg="StopPodSandbox for \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\"" Mar 2 12:59:19.168829 containerd[1473]: time="2026-03-02T12:59:19.168285580Z" level=info msg="Ensure that sandbox f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9 in task-service has been cleanup successfully" Mar 2 12:59:19.171128 kubelet[2554]: I0302 12:59:19.170654 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gtq5m" podStartSLOduration=3.899298546 podStartE2EDuration="19.17063292s" podCreationTimestamp="2026-03-02 12:59:00 +0000 UTC" firstStartedPulling="2026-03-02 12:59:01.100990024 +0000 UTC m=+23.582672615" lastFinishedPulling="2026-03-02 12:59:16.3723244 +0000 UTC m=+38.854006989" observedRunningTime="2026-03-02 12:59:19.164491255 +0000 UTC m=+41.646173855" watchObservedRunningTime="2026-03-02 12:59:19.17063292 +0000 UTC m=+41.652315510" Mar 2 12:59:19.181514 kubelet[2554]: I0302 12:59:19.181480 2554 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:19.183132 containerd[1473]: time="2026-03-02T12:59:19.182910739Z" level=info msg="StopPodSandbox for \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\"" Mar 2 12:59:19.183472 containerd[1473]: time="2026-03-02T12:59:19.183208682Z" level=info msg="Ensure that sandbox d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785 in task-service has been cleanup successfully" Mar 2 12:59:19.254325 systemd-networkd[1387]: calif30ceb61840: Link UP Mar 2 12:59:19.255180 systemd-networkd[1387]: calif30ceb61840: Gained carrier Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.843 
[ERROR][3683] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.881 [INFO][3683] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w67n9-eth0 csi-node-driver- calico-system ac16a752-3fe7-46bc-9e8d-01b34213f083 784 0 2026-03-02 12:59:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7494d65b57 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w67n9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif30ceb61840 [] [] }} ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.882 [INFO][3683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.939 [INFO][3705] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" HandleID="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Workload="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.951 [INFO][3705] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" HandleID="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Workload="localhost-k8s-csi--node--driver--w67n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w67n9", "timestamp":"2026-03-02 12:59:18.939577872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002709a0)} Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.951 [INFO][3705] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.951 [INFO][3705] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.951 [INFO][3705] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.957 [INFO][3705] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:18.967 [INFO][3705] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.064 [INFO][3705] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.078 [INFO][3705] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.088 [INFO][3705] ipam/ipam.go 588: Found unclaimed block in 9.687052ms host="localhost" subnet=192.168.88.128/26 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.088 [INFO][3705] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.096 [INFO][3705] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.099 [INFO][3705] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.099 [INFO][3705] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.099 [INFO][3705] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.102 [INFO][3705] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.102 [INFO][3705] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.102 [INFO][3705] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.104 [INFO][3705] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727 Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.110 [INFO][3705] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.119 [INFO][3705] ipam/ipam.go 1276: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.339781 containerd[1473]: 2026-03-02 12:59:19.180 [INFO][3705] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.186 [INFO][3705] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727 Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.196 [INFO][3705] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.209 [INFO][3705] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.209 [INFO][3705] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" host="localhost" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.209 [INFO][3705] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.209 [INFO][3705] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" HandleID="k8s-pod-network.7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Workload="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.238 [INFO][3683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w67n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac16a752-3fe7-46bc-9e8d-01b34213f083", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w67n9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif30ceb61840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.238 [INFO][3683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.238 [INFO][3683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif30ceb61840 ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.255 [INFO][3683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.343342 containerd[1473]: 2026-03-02 12:59:19.256 [INFO][3683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" 
Namespace="calico-system" Pod="csi-node-driver-w67n9" WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w67n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac16a752-3fe7-46bc-9e8d-01b34213f083", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727", Pod:"csi-node-driver-w67n9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif30ceb61840", MAC:"42:b6:d7:58:5e:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:19.343666 containerd[1473]: 2026-03-02 12:59:19.303 [INFO][3683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727" Namespace="calico-system" Pod="csi-node-driver-w67n9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--w67n9-eth0" Mar 2 12:59:19.406548 containerd[1473]: time="2026-03-02T12:59:19.406129684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:19.406548 containerd[1473]: time="2026-03-02T12:59:19.406290261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:19.406548 containerd[1473]: time="2026-03-02T12:59:19.406302224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:19.408859 containerd[1473]: time="2026-03-02T12:59:19.406406748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:19.506333 systemd[1]: Started cri-containerd-7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727.scope - libcontainer container 7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727. Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.403 [INFO][3751] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.407 [INFO][3751] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" iface="eth0" netns="/var/run/netns/cni-74d48094-9e86-804e-8613-13db087f4962" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.409 [INFO][3751] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" iface="eth0" netns="/var/run/netns/cni-74d48094-9e86-804e-8613-13db087f4962" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.422 [INFO][3751] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" iface="eth0" netns="/var/run/netns/cni-74d48094-9e86-804e-8613-13db087f4962" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.422 [INFO][3751] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.422 [INFO][3751] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.553 [INFO][3919] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.558 [INFO][3919] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.558 [INFO][3919] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.569 [WARNING][3919] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.570 [INFO][3919] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.578 [INFO][3919] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.604173 containerd[1473]: 2026-03-02 12:59:19.590 [INFO][3751] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:19.603402 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:19.609745 containerd[1473]: time="2026-03-02T12:59:19.609625965Z" level=info msg="TearDown network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\" successfully" Mar 2 12:59:19.609745 containerd[1473]: time="2026-03-02T12:59:19.609672491Z" level=info msg="StopPodSandbox for \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\" returns successfully" Mar 2 12:59:19.610924 containerd[1473]: time="2026-03-02T12:59:19.610898377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-nx2cs,Uid:2f74b832-0faa-4b95-8876-eccbea5d41d7,Namespace:calico-system,Attempt:1,}" Mar 2 12:59:19.659306 containerd[1473]: time="2026-03-02T12:59:19.658534007Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w67n9,Uid:ac16a752-3fe7-46bc-9e8d-01b34213f083,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727\"" Mar 2 12:59:19.665963 systemd[1]: run-netns-cni\x2d74d48094\x2d9e86\x2d804e\x2d8613\x2d13db087f4962.mount: Deactivated successfully. Mar 2 12:59:19.673173 containerd[1473]: time="2026-03-02T12:59:19.672789437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.415 [INFO][3744] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.421 [INFO][3744] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" iface="eth0" netns="/var/run/netns/cni-3b5d90a0-655b-1d55-b688-911aa03c032a" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.421 [INFO][3744] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" iface="eth0" netns="/var/run/netns/cni-3b5d90a0-655b-1d55-b688-911aa03c032a" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.422 [INFO][3744] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" iface="eth0" netns="/var/run/netns/cni-3b5d90a0-655b-1d55-b688-911aa03c032a" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.422 [INFO][3744] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.422 [INFO][3744] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.600 [INFO][3912] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.605 [INFO][3912] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.605 [INFO][3912] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.617 [WARNING][3912] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.617 [INFO][3912] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.620 [INFO][3912] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.679234 containerd[1473]: 2026-03-02 12:59:19.645 [INFO][3744] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.393 [INFO][3746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.394 [INFO][3746] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" iface="eth0" netns="/var/run/netns/cni-7222a8f2-633c-6826-ea71-0de4ff5b9ed0" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.398 [INFO][3746] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" iface="eth0" netns="/var/run/netns/cni-7222a8f2-633c-6826-ea71-0de4ff5b9ed0" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.402 [INFO][3746] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" iface="eth0" netns="/var/run/netns/cni-7222a8f2-633c-6826-ea71-0de4ff5b9ed0" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.402 [INFO][3746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.402 [INFO][3746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.606 [INFO][3899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.608 [INFO][3899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.620 [INFO][3899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.636 [WARNING][3899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.636 [INFO][3899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.640 [INFO][3899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.687155 containerd[1473]: 2026-03-02 12:59:19.650 [INFO][3746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:19.685876 systemd[1]: run-netns-cni\x2d3b5d90a0\x2d655b\x2d1d55\x2db688\x2d911aa03c032a.mount: Deactivated successfully. 
Mar 2 12:59:19.688420 containerd[1473]: time="2026-03-02T12:59:19.687455289Z" level=info msg="TearDown network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\" successfully" Mar 2 12:59:19.688420 containerd[1473]: time="2026-03-02T12:59:19.687520230Z" level=info msg="StopPodSandbox for \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\" returns successfully" Mar 2 12:59:19.689199 containerd[1473]: time="2026-03-02T12:59:19.688898108Z" level=info msg="TearDown network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\" successfully" Mar 2 12:59:19.689199 containerd[1473]: time="2026-03-02T12:59:19.688928355Z" level=info msg="StopPodSandbox for \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\" returns successfully" Mar 2 12:59:19.693121 kubelet[2554]: E0302 12:59:19.689635 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:19.692259 systemd[1]: run-netns-cni\x2d7222a8f2\x2d633c\x2d6826\x2dea71\x2d0de4ff5b9ed0.mount: Deactivated successfully. 
Mar 2 12:59:19.693297 containerd[1473]: time="2026-03-02T12:59:19.689642987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c4f95764-z2fkz,Uid:6ddf42e0-6cd1-4b95-8cfd-884ff77a512d,Namespace:calico-system,Attempt:1,}" Mar 2 12:59:19.693553 containerd[1473]: time="2026-03-02T12:59:19.693523296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfqk,Uid:1615fc41-91d4-4d09-afc6-7512c37dc161,Namespace:kube-system,Attempt:1,}" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.467 [INFO][3807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.467 [INFO][3807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" iface="eth0" netns="/var/run/netns/cni-24d4f520-a730-d29e-6811-d037d27feeab" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.467 [INFO][3807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" iface="eth0" netns="/var/run/netns/cni-24d4f520-a730-d29e-6811-d037d27feeab" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.473 [INFO][3807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" iface="eth0" netns="/var/run/netns/cni-24d4f520-a730-d29e-6811-d037d27feeab" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.473 [INFO][3807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.473 [INFO][3807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.621 [INFO][3931] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.624 [INFO][3931] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.642 [INFO][3931] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.654 [WARNING][3931] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.654 [INFO][3931] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.661 [INFO][3931] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.699136 containerd[1473]: 2026-03-02 12:59:19.676 [INFO][3807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:19.701980 systemd[1]: run-netns-cni\x2d24d4f520\x2da730\x2dd29e\x2d6811\x2dd037d27feeab.mount: Deactivated successfully. 
Mar 2 12:59:19.704973 containerd[1473]: time="2026-03-02T12:59:19.704945125Z" level=info msg="TearDown network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\" successfully" Mar 2 12:59:19.705196 containerd[1473]: time="2026-03-02T12:59:19.705136530Z" level=info msg="StopPodSandbox for \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\" returns successfully" Mar 2 12:59:19.707303 kubelet[2554]: E0302 12:59:19.706958 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:19.709631 containerd[1473]: time="2026-03-02T12:59:19.707955534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w7cmz,Uid:9d794842-cae6-42e3-92b8-3b3c0e54e550,Namespace:kube-system,Attempt:1,}" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.467 [INFO][3810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.467 [INFO][3810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" iface="eth0" netns="/var/run/netns/cni-8baa3ed1-3e02-b4f4-3a0c-ed91ceb61307" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.468 [INFO][3810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" iface="eth0" netns="/var/run/netns/cni-8baa3ed1-3e02-b4f4-3a0c-ed91ceb61307" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.470 [INFO][3810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" iface="eth0" netns="/var/run/netns/cni-8baa3ed1-3e02-b4f4-3a0c-ed91ceb61307" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.470 [INFO][3810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.471 [INFO][3810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.656 [INFO][3932] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.657 [INFO][3932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.661 [INFO][3932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.680 [WARNING][3932] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.680 [INFO][3932] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.692 [INFO][3932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.722192 containerd[1473]: 2026-03-02 12:59:19.711 [INFO][3810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:19.725744 containerd[1473]: time="2026-03-02T12:59:19.724761571Z" level=info msg="TearDown network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\" successfully" Mar 2 12:59:19.725744 containerd[1473]: time="2026-03-02T12:59:19.724788460Z" level=info msg="StopPodSandbox for \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\" returns successfully" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.534 [INFO][3834] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.535 [INFO][3834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" iface="eth0" netns="/var/run/netns/cni-234d629f-48cb-e48d-f685-6a536bdd5f01" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.536 [INFO][3834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" iface="eth0" netns="/var/run/netns/cni-234d629f-48cb-e48d-f685-6a536bdd5f01" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.541 [INFO][3834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" iface="eth0" netns="/var/run/netns/cni-234d629f-48cb-e48d-f685-6a536bdd5f01" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.542 [INFO][3834] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.542 [INFO][3834] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.684 [INFO][3948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.684 [INFO][3948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.693 [INFO][3948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.705 [WARNING][3948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.706 [INFO][3948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.711 [INFO][3948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.725744 containerd[1473]: 2026-03-02 12:59:19.718 [INFO][3834] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:19.726313 containerd[1473]: time="2026-03-02T12:59:19.726144656Z" level=info msg="TearDown network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\" successfully" Mar 2 12:59:19.726313 containerd[1473]: time="2026-03-02T12:59:19.726171796Z" level=info msg="StopPodSandbox for \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\" returns successfully" Mar 2 12:59:19.727595 containerd[1473]: time="2026-03-02T12:59:19.727332732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-7q7cm,Uid:292cf7f8-5770-4cfe-98b8-b56cbdd122c1,Namespace:calico-system,Attempt:1,}" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.591 [INFO][3860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.591 [INFO][3860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" iface="eth0" netns="/var/run/netns/cni-f02e1eaa-d121-7dbe-2dc7-fcc74ff0397b" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.591 [INFO][3860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" iface="eth0" netns="/var/run/netns/cni-f02e1eaa-d121-7dbe-2dc7-fcc74ff0397b" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.592 [INFO][3860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" iface="eth0" netns="/var/run/netns/cni-f02e1eaa-d121-7dbe-2dc7-fcc74ff0397b" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.592 [INFO][3860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.592 [INFO][3860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.702 [INFO][3962] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.702 [INFO][3962] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.711 [INFO][3962] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.728 [WARNING][3962] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.728 [INFO][3962] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.733 [INFO][3962] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:19.744793 containerd[1473]: 2026-03-02 12:59:19.736 [INFO][3860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:19.747300 containerd[1473]: time="2026-03-02T12:59:19.745677688Z" level=info msg="TearDown network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\" successfully" Mar 2 12:59:19.747300 containerd[1473]: time="2026-03-02T12:59:19.746701207Z" level=info msg="StopPodSandbox for \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\" returns successfully" Mar 2 12:59:19.749118 containerd[1473]: time="2026-03-02T12:59:19.748971834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-dclsc,Uid:214b37e0-0ea7-495d-89ba-9790d04fdf36,Namespace:calico-system,Attempt:1,}" Mar 2 12:59:19.820959 kubelet[2554]: I0302 12:59:19.818486 2554 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr9b8\" (UniqueName: \"kubernetes.io/projected/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-kube-api-access-vr9b8\") pod \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " Mar 2 12:59:19.820959 kubelet[2554]: I0302 
12:59:19.818563 2554 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-nginx-config\") pod \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " Mar 2 12:59:19.820959 kubelet[2554]: I0302 12:59:19.818692 2554 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-backend-key-pair\") pod \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " Mar 2 12:59:19.820959 kubelet[2554]: I0302 12:59:19.818733 2554 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-ca-bundle\") pod \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\" (UID: \"6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d\") " Mar 2 12:59:19.820959 kubelet[2554]: I0302 12:59:19.820318 2554 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d" (UID: "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:59:19.824584 kubelet[2554]: I0302 12:59:19.823901 2554 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d" (UID: "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:59:19.827897 kubelet[2554]: I0302 12:59:19.827859 2554 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-kube-api-access-vr9b8" (OuterVolumeSpecName: "kube-api-access-vr9b8") pod "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d" (UID: "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d"). InnerVolumeSpecName "kube-api-access-vr9b8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 12:59:19.834865 kubelet[2554]: I0302 12:59:19.834763 2554 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d" (UID: "6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 12:59:19.919761 kubelet[2554]: I0302 12:59:19.919566 2554 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vr9b8\" (UniqueName: \"kubernetes.io/projected/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-kube-api-access-vr9b8\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:19.919761 kubelet[2554]: I0302 12:59:19.919648 2554 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:19.919761 kubelet[2554]: I0302 12:59:19.919667 2554 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:19.919761 kubelet[2554]: I0302 12:59:19.919680 2554 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:20.076957 systemd-networkd[1387]: cali8adb6701f7a: Link UP Mar 2 12:59:20.077731 systemd-networkd[1387]: cali8adb6701f7a: Gained carrier Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.770 [ERROR][3980] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.800 [INFO][3980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0 calico-apiserver-5bc544cbd4- calico-system 2f74b832-0faa-4b95-8876-eccbea5d41d7 976 0 2026-03-02 12:58:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bc544cbd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bc544cbd4-nx2cs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8adb6701f7a [] [] }} ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.800 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.961 [INFO][4045] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" HandleID="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.978 [INFO][4045] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" HandleID="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5bc544cbd4-nx2cs", "timestamp":"2026-03-02 12:59:19.961896641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000cac60)} Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.978 [INFO][4045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.978 [INFO][4045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.978 [INFO][4045] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:19.984 [INFO][4045] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.011 [INFO][4045] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.032 [INFO][4045] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.040 [INFO][4045] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.043 [INFO][4045] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.043 [INFO][4045] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.048 [INFO][4045] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4 Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.055 [INFO][4045] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.069 [INFO][4045] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.069 [INFO][4045] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" host="localhost" Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.069 [INFO][4045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:20.100414 containerd[1473]: 2026-03-02 12:59:20.070 [INFO][4045] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" HandleID="k8s-pod-network.903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.101610 containerd[1473]: 2026-03-02 12:59:20.074 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"2f74b832-0faa-4b95-8876-eccbea5d41d7", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bc544cbd4-nx2cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8adb6701f7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.101610 containerd[1473]: 2026-03-02 12:59:20.074 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.101610 containerd[1473]: 2026-03-02 12:59:20.074 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8adb6701f7a ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.101610 containerd[1473]: 2026-03-02 12:59:20.078 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.101610 containerd[1473]: 2026-03-02 12:59:20.078 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"2f74b832-0faa-4b95-8876-eccbea5d41d7", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4", Pod:"calico-apiserver-5bc544cbd4-nx2cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8adb6701f7a", MAC:"ee:81:ef:66:e6:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.101610 containerd[1473]: 2026-03-02 12:59:20.095 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4" 
Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-nx2cs" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:20.135913 containerd[1473]: time="2026-03-02T12:59:20.135680121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:20.135913 containerd[1473]: time="2026-03-02T12:59:20.135776419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:20.135913 containerd[1473]: time="2026-03-02T12:59:20.135797640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.136376 containerd[1473]: time="2026-03-02T12:59:20.136298750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.171245 systemd[1]: Started cri-containerd-903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4.scope - libcontainer container 903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4. Mar 2 12:59:20.195103 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:20.201276 systemd[1]: Removed slice kubepods-besteffort-pod6ed2a71a_3c1f_4929_9f89_b7aec80e5c6d.slice - libcontainer container kubepods-besteffort-pod6ed2a71a_3c1f_4929_9f89_b7aec80e5c6d.slice. 
Mar 2 12:59:20.233195 systemd-networkd[1387]: cali3c9aea08971: Link UP Mar 2 12:59:20.234662 systemd-networkd[1387]: cali3c9aea08971: Gained carrier Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:19.804 [ERROR][4002] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:19.849 [INFO][4002] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0 calico-kube-controllers-74c4f95764- calico-system 6ddf42e0-6cd1-4b95-8cfd-884ff77a512d 975 0 2026-03-02 12:59:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c4f95764 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74c4f95764-z2fkz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3c9aea08971 [] [] }} ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:19.849 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:19.965 [INFO][4073] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" HandleID="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:19.984 [INFO][4073] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" HandleID="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049ae00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74c4f95764-z2fkz", "timestamp":"2026-03-02 12:59:19.965195461 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036f340)} Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:19.984 [INFO][4073] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.070 [INFO][4073] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.070 [INFO][4073] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.085 [INFO][4073] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.100 [INFO][4073] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.121 [INFO][4073] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.125 [INFO][4073] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.130 [INFO][4073] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.130 [INFO][4073] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.133 [INFO][4073] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4 Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.195 [INFO][4073] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.217 [INFO][4073] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.217 [INFO][4073] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" host="localhost" Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.217 [INFO][4073] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:20.270700 containerd[1473]: 2026-03-02 12:59:20.218 [INFO][4073] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" HandleID="k8s-pod-network.bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.272389 containerd[1473]: 2026-03-02 12:59:20.229 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0", GenerateName:"calico-kube-controllers-74c4f95764-", Namespace:"calico-system", SelfLink:"", UID:"6ddf42e0-6cd1-4b95-8cfd-884ff77a512d", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c4f95764", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74c4f95764-z2fkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c9aea08971", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.272389 containerd[1473]: 2026-03-02 12:59:20.230 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.272389 containerd[1473]: 2026-03-02 12:59:20.230 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c9aea08971 ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.272389 containerd[1473]: 2026-03-02 12:59:20.237 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.272389 containerd[1473]: 2026-03-02 
12:59:20.237 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0", GenerateName:"calico-kube-controllers-74c4f95764-", Namespace:"calico-system", SelfLink:"", UID:"6ddf42e0-6cd1-4b95-8cfd-884ff77a512d", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c4f95764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4", Pod:"calico-kube-controllers-74c4f95764-z2fkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c9aea08971", MAC:"72:d3:1a:6f:e7:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.272389 containerd[1473]: 2026-03-02 
12:59:20.262 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4" Namespace="calico-system" Pod="calico-kube-controllers-74c4f95764-z2fkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:20.274984 containerd[1473]: time="2026-03-02T12:59:20.274349271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-nx2cs,Uid:2f74b832-0faa-4b95-8876-eccbea5d41d7,Namespace:calico-system,Attempt:1,} returns sandbox id \"903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4\"" Mar 2 12:59:20.317718 containerd[1473]: time="2026-03-02T12:59:20.317578438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:20.317870 containerd[1473]: time="2026-03-02T12:59:20.317741891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:20.317870 containerd[1473]: time="2026-03-02T12:59:20.317754996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.318798 containerd[1473]: time="2026-03-02T12:59:20.318311579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.338255 systemd[1]: Created slice kubepods-besteffort-podffff6692_fe51_4dae_b36d_bd885e06fb78.slice - libcontainer container kubepods-besteffort-podffff6692_fe51_4dae_b36d_bd885e06fb78.slice. 
Mar 2 12:59:20.385334 systemd-networkd[1387]: cali83dc4ae15b6: Link UP Mar 2 12:59:20.385701 systemd-networkd[1387]: cali83dc4ae15b6: Gained carrier Mar 2 12:59:20.388068 systemd[1]: Started cri-containerd-bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4.scope - libcontainer container bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4. Mar 2 12:59:20.428700 kubelet[2554]: I0302 12:59:20.427744 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ffff6692-fe51-4dae-b36d-bd885e06fb78-nginx-config\") pod \"whisker-6d7f459564-nfkfg\" (UID: \"ffff6692-fe51-4dae-b36d-bd885e06fb78\") " pod="calico-system/whisker-6d7f459564-nfkfg" Mar 2 12:59:20.428700 kubelet[2554]: I0302 12:59:20.427795 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffff6692-fe51-4dae-b36d-bd885e06fb78-whisker-ca-bundle\") pod \"whisker-6d7f459564-nfkfg\" (UID: \"ffff6692-fe51-4dae-b36d-bd885e06fb78\") " pod="calico-system/whisker-6d7f459564-nfkfg" Mar 2 12:59:20.428700 kubelet[2554]: I0302 12:59:20.427827 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffff6692-fe51-4dae-b36d-bd885e06fb78-whisker-backend-key-pair\") pod \"whisker-6d7f459564-nfkfg\" (UID: \"ffff6692-fe51-4dae-b36d-bd885e06fb78\") " pod="calico-system/whisker-6d7f459564-nfkfg" Mar 2 12:59:20.428700 kubelet[2554]: I0302 12:59:20.427855 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnch2\" (UniqueName: \"kubernetes.io/projected/ffff6692-fe51-4dae-b36d-bd885e06fb78-kube-api-access-bnch2\") pod \"whisker-6d7f459564-nfkfg\" (UID: \"ffff6692-fe51-4dae-b36d-bd885e06fb78\") " 
pod="calico-system/whisker-6d7f459564-nfkfg" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:19.843 [ERROR][4010] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:19.869 [INFO][4010] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0 coredns-674b8bbfcf- kube-system 1615fc41-91d4-4d09-afc6-7512c37dc161 974 0 2026-03-02 12:58:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bpfqk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83dc4ae15b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:19.869 [INFO][4010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:19.998 [INFO][4075] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" HandleID="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.021 [INFO][4075] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" HandleID="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bpfqk", "timestamp":"2026-03-02 12:59:19.998948824 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001a2dc0)} Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.027 [INFO][4075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.218 [INFO][4075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.218 [INFO][4075] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.226 [INFO][4075] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.240 [INFO][4075] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.259 [INFO][4075] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.267 [INFO][4075] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.275 [INFO][4075] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.275 [INFO][4075] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.289 [INFO][4075] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.312 [INFO][4075] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.349 [INFO][4075] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.349 [INFO][4075] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" host="localhost" Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.349 [INFO][4075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:20.429387 containerd[1473]: 2026-03-02 12:59:20.349 [INFO][4075] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" HandleID="k8s-pod-network.80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.431213 containerd[1473]: 2026-03-02 12:59:20.364 [INFO][4010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1615fc41-91d4-4d09-afc6-7512c37dc161", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bpfqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83dc4ae15b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.431213 containerd[1473]: 2026-03-02 12:59:20.368 [INFO][4010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.431213 containerd[1473]: 2026-03-02 12:59:20.372 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83dc4ae15b6 ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.431213 containerd[1473]: 2026-03-02 12:59:20.377 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.431213 containerd[1473]: 2026-03-02 12:59:20.378 [INFO][4010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1615fc41-91d4-4d09-afc6-7512c37dc161", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac", Pod:"coredns-674b8bbfcf-bpfqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83dc4ae15b6", MAC:"0a:f8:99:02:2e:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.431213 containerd[1473]: 2026-03-02 12:59:20.404 [INFO][4010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfqk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:20.462337 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:20.536798 containerd[1473]: time="2026-03-02T12:59:20.519204011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:20.536798 containerd[1473]: time="2026-03-02T12:59:20.519300400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:20.536798 containerd[1473]: time="2026-03-02T12:59:20.519321039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.536798 containerd[1473]: time="2026-03-02T12:59:20.519467299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.560286 systemd-networkd[1387]: caliacf5f7fd8a6: Link UP Mar 2 12:59:20.563472 systemd-networkd[1387]: caliacf5f7fd8a6: Gained carrier Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:19.869 [ERROR][4029] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:19.894 [INFO][4029] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0 calico-apiserver-5bc544cbd4- calico-system 292cf7f8-5770-4cfe-98b8-b56cbdd122c1 979 0 2026-03-02 12:58:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bc544cbd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bc544cbd4-7q7cm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliacf5f7fd8a6 [] [] }} ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:19.894 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.006 [INFO][4089] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" HandleID="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.033 [INFO][4089] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" HandleID="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005105e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5bc544cbd4-7q7cm", "timestamp":"2026-03-02 12:59:20.006619963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d5ce0)} Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.033 [INFO][4089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.349 [INFO][4089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.349 [INFO][4089] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.366 [INFO][4089] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.384 [INFO][4089] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.421 [INFO][4089] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.424 [INFO][4089] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.430 [INFO][4089] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.434 [INFO][4089] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.449 [INFO][4089] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8 Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.468 [INFO][4089] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.497 [INFO][4089] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.498 [INFO][4089] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" host="localhost" Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.498 [INFO][4089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:20.624989 containerd[1473]: 2026-03-02 12:59:20.498 [INFO][4089] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" HandleID="k8s-pod-network.b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.626298 containerd[1473]: 2026-03-02 12:59:20.552 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"292cf7f8-5770-4cfe-98b8-b56cbdd122c1", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bc544cbd4-7q7cm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliacf5f7fd8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.626298 containerd[1473]: 2026-03-02 12:59:20.553 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.626298 containerd[1473]: 2026-03-02 12:59:20.553 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacf5f7fd8a6 ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.626298 containerd[1473]: 2026-03-02 12:59:20.566 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.626298 containerd[1473]: 2026-03-02 12:59:20.569 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"292cf7f8-5770-4cfe-98b8-b56cbdd122c1", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8", Pod:"calico-apiserver-5bc544cbd4-7q7cm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliacf5f7fd8a6", MAC:"5a:1f:ef:42:7a:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.626298 containerd[1473]: 2026-03-02 12:59:20.605 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8" 
Namespace="calico-system" Pod="calico-apiserver-5bc544cbd4-7q7cm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:20.644296 systemd[1]: Started cri-containerd-80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac.scope - libcontainer container 80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac. Mar 2 12:59:20.665506 containerd[1473]: time="2026-03-02T12:59:20.665457896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7f459564-nfkfg,Uid:ffff6692-fe51-4dae-b36d-bd885e06fb78,Namespace:calico-system,Attempt:0,}" Mar 2 12:59:20.683669 containerd[1473]: time="2026-03-02T12:59:20.683571719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c4f95764-z2fkz,Uid:6ddf42e0-6cd1-4b95-8cfd-884ff77a512d,Namespace:calico-system,Attempt:1,} returns sandbox id \"bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4\"" Mar 2 12:59:20.688383 systemd[1]: run-netns-cni\x2d8baa3ed1\x2d3e02\x2db4f4\x2d3a0c\x2ded91ceb61307.mount: Deactivated successfully. Mar 2 12:59:20.691214 systemd[1]: run-netns-cni\x2df02e1eaa\x2dd121\x2d7dbe\x2d2dc7\x2dfcc74ff0397b.mount: Deactivated successfully. Mar 2 12:59:20.691288 systemd[1]: run-netns-cni\x2d234d629f\x2d48cb\x2de48d\x2df685\x2d6a536bdd5f01.mount: Deactivated successfully. Mar 2 12:59:20.691356 systemd[1]: var-lib-kubelet-pods-6ed2a71a\x2d3c1f\x2d4929\x2d9f89\x2db7aec80e5c6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvr9b8.mount: Deactivated successfully. Mar 2 12:59:20.691457 systemd[1]: var-lib-kubelet-pods-6ed2a71a\x2d3c1f\x2d4929\x2d9f89\x2db7aec80e5c6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 2 12:59:20.731856 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:20.735712 systemd-networkd[1387]: cali0943502b6da: Link UP Mar 2 12:59:20.738813 systemd-networkd[1387]: cali0943502b6da: Gained carrier Mar 2 12:59:20.788421 containerd[1473]: time="2026-03-02T12:59:20.786657581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:20.788421 containerd[1473]: time="2026-03-02T12:59:20.786739735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:20.788421 containerd[1473]: time="2026-03-02T12:59:20.786779999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.788421 containerd[1473]: time="2026-03-02T12:59:20.786889031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:19.930 [ERROR][4052] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:19.962 [INFO][4052] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9566f57b5--dclsc-eth0 goldmane-9566f57b5- calico-system 214b37e0-0ea7-495d-89ba-9790d04fdf36 980 0 2026-03-02 12:58:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9566f57b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9566f57b5-dclsc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0943502b6da [] [] }} ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:19.963 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.024 [INFO][4103] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" HandleID="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.041 [INFO][4103] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" HandleID="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9566f57b5-dclsc", "timestamp":"2026-03-02 12:59:20.024595591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000207080)} Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.043 [INFO][4103] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.500 [INFO][4103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.500 [INFO][4103] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.529 [INFO][4103] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.579 [INFO][4103] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.603 [INFO][4103] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.611 [INFO][4103] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.626 [INFO][4103] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.626 [INFO][4103] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.635 [INFO][4103] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2 Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.653 [INFO][4103] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.709 [INFO][4103] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.709 [INFO][4103] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" host="localhost" Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.709 [INFO][4103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:20.823855 containerd[1473]: 2026-03-02 12:59:20.709 [INFO][4103] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" HandleID="k8s-pod-network.11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.824847 containerd[1473]: 2026-03-02 12:59:20.719 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--dclsc-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"214b37e0-0ea7-495d-89ba-9790d04fdf36", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9566f57b5-dclsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0943502b6da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.824847 containerd[1473]: 2026-03-02 12:59:20.719 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.824847 containerd[1473]: 2026-03-02 12:59:20.720 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0943502b6da ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.824847 containerd[1473]: 2026-03-02 12:59:20.741 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.824847 containerd[1473]: 2026-03-02 12:59:20.757 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" 
WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--dclsc-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"214b37e0-0ea7-495d-89ba-9790d04fdf36", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2", Pod:"goldmane-9566f57b5-dclsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0943502b6da", MAC:"62:46:63:f0:03:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:20.824847 containerd[1473]: 2026-03-02 12:59:20.811 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2" Namespace="calico-system" Pod="goldmane-9566f57b5-dclsc" WorkloadEndpoint="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:20.861787 containerd[1473]: time="2026-03-02T12:59:20.861733414Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfqk,Uid:1615fc41-91d4-4d09-afc6-7512c37dc161,Namespace:kube-system,Attempt:1,} returns sandbox id \"80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac\"" Mar 2 12:59:20.865648 kubelet[2554]: E0302 12:59:20.865612 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:20.867766 systemd[1]: Started cri-containerd-b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8.scope - libcontainer container b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8. Mar 2 12:59:20.909836 containerd[1473]: time="2026-03-02T12:59:20.909739866Z" level=info msg="CreateContainer within sandbox \"80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:59:20.910885 containerd[1473]: time="2026-03-02T12:59:20.906474775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:20.910885 containerd[1473]: time="2026-03-02T12:59:20.906557658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:20.910885 containerd[1473]: time="2026-03-02T12:59:20.906577656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.910885 containerd[1473]: time="2026-03-02T12:59:20.906683843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:20.915726 systemd-networkd[1387]: caliba903407163: Link UP Mar 2 12:59:20.917565 systemd-networkd[1387]: caliba903407163: Gained carrier Mar 2 12:59:20.931860 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:20.972684 systemd[1]: Started cri-containerd-11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2.scope - libcontainer container 11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2. Mar 2 12:59:20.978650 containerd[1473]: time="2026-03-02T12:59:20.978521432Z" level=info msg="CreateContainer within sandbox \"80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c007c47de245862559c5172f234a97f8b93b20fa474fdf278e74886be7646e3\"" Mar 2 12:59:20.994562 containerd[1473]: time="2026-03-02T12:59:20.994527597Z" level=info msg="StartContainer for \"4c007c47de245862559c5172f234a97f8b93b20fa474fdf278e74886be7646e3\"" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:19.934 [ERROR][4025] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:19.969 [INFO][4025] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0 coredns-674b8bbfcf- kube-system 9d794842-cae6-42e3-92b8-3b3c0e54e550 978 0 2026-03-02 12:58:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-w7cmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba903407163 [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:19.969 [INFO][4025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.053 [INFO][4105] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" HandleID="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.067 [INFO][4105] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" HandleID="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059fd80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-w7cmz", "timestamp":"2026-03-02 12:59:20.053487809 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000bc420)} Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.068 [INFO][4105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.711 [INFO][4105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.711 [INFO][4105] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.724 [INFO][4105] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.742 [INFO][4105] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.766 [INFO][4105] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.775 [INFO][4105] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.787 [INFO][4105] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.787 [INFO][4105] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.807 [INFO][4105] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.825 [INFO][4105] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.851 [INFO][4105] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.857 [INFO][4105] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" host="localhost" Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.858 [INFO][4105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:21.001140 containerd[1473]: 2026-03-02 12:59:20.858 [INFO][4105] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" HandleID="k8s-pod-network.12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.002270 containerd[1473]: 2026-03-02 12:59:20.893 [INFO][4025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d794842-cae6-42e3-92b8-3b3c0e54e550", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-w7cmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba903407163", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:21.002270 containerd[1473]: 2026-03-02 12:59:20.894 [INFO][4025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.002270 containerd[1473]: 2026-03-02 12:59:20.897 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba903407163 ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.002270 containerd[1473]: 2026-03-02 12:59:20.923 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.002270 containerd[1473]: 2026-03-02 12:59:20.935 [INFO][4025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d794842-cae6-42e3-92b8-3b3c0e54e550", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad", Pod:"coredns-674b8bbfcf-w7cmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba903407163", MAC:"d2:b2:be:20:e4:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:21.002270 containerd[1473]: 2026-03-02 12:59:20.959 [INFO][4025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-w7cmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:21.057857 containerd[1473]: time="2026-03-02T12:59:21.057728032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc544cbd4-7q7cm,Uid:292cf7f8-5770-4cfe-98b8-b56cbdd122c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8\"" Mar 2 12:59:21.064797 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:21.082318 systemd[1]: Started cri-containerd-4c007c47de245862559c5172f234a97f8b93b20fa474fdf278e74886be7646e3.scope - libcontainer container 4c007c47de245862559c5172f234a97f8b93b20fa474fdf278e74886be7646e3. Mar 2 12:59:21.087818 containerd[1473]: time="2026-03-02T12:59:21.086058034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:21.087818 containerd[1473]: time="2026-03-02T12:59:21.086278022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:21.087818 containerd[1473]: time="2026-03-02T12:59:21.086305323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:21.087818 containerd[1473]: time="2026-03-02T12:59:21.086986839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:21.120249 systemd[1]: Started cri-containerd-12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad.scope - libcontainer container 12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad. Mar 2 12:59:21.150580 containerd[1473]: time="2026-03-02T12:59:21.150532879Z" level=info msg="StartContainer for \"4c007c47de245862559c5172f234a97f8b93b20fa474fdf278e74886be7646e3\" returns successfully" Mar 2 12:59:21.168629 containerd[1473]: time="2026-03-02T12:59:21.168558931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-dclsc,Uid:214b37e0-0ea7-495d-89ba-9790d04fdf36,Namespace:calico-system,Attempt:1,} returns sandbox id \"11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2\"" Mar 2 12:59:21.168726 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:21.213818 kubelet[2554]: E0302 12:59:21.213534 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:21.235241 containerd[1473]: time="2026-03-02T12:59:21.234510473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w7cmz,Uid:9d794842-cae6-42e3-92b8-3b3c0e54e550,Namespace:kube-system,Attempt:1,} returns sandbox id \"12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad\"" Mar 2 12:59:21.235305 systemd-networkd[1387]: calif30ceb61840: Gained IPv6LL Mar 2 12:59:21.239109 kubelet[2554]: E0302 12:59:21.238818 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:21.246954 containerd[1473]: time="2026-03-02T12:59:21.246894859Z" level=info msg="CreateContainer within sandbox \"12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:59:21.275672 systemd-networkd[1387]: cali9f52efc31c0: Link UP Mar 2 12:59:21.282958 systemd-networkd[1387]: cali9f52efc31c0: Gained carrier Mar 2 12:59:21.291247 containerd[1473]: time="2026-03-02T12:59:21.291167821Z" level=info msg="CreateContainer within sandbox \"12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8d7dc90f4eea955798e615ade07f2a5f3985fd7d0d28c30b64e887331ff16f1\"" Mar 2 12:59:21.295587 containerd[1473]: time="2026-03-02T12:59:21.295558699Z" level=info msg="StartContainer for \"c8d7dc90f4eea955798e615ade07f2a5f3985fd7d0d28c30b64e887331ff16f1\"" Mar 2 12:59:21.318452 kubelet[2554]: I0302 12:59:21.318296 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bpfqk" podStartSLOduration=39.318260397 podStartE2EDuration="39.318260397s" podCreationTimestamp="2026-03-02 12:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:59:21.253413598 +0000 UTC m=+43.735096188" watchObservedRunningTime="2026-03-02 12:59:21.318260397 +0000 UTC m=+43.799943008" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:20.961 [ERROR][4376] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:20.992 [INFO][4376] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--6d7f459564--nfkfg-eth0 whisker-6d7f459564- calico-system ffff6692-fe51-4dae-b36d-bd885e06fb78 1010 0 2026-03-02 12:59:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d7f459564 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d7f459564-nfkfg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9f52efc31c0 [] [] }} ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:20.992 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.145 [INFO][4511] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" HandleID="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Workload="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.163 [INFO][4511] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" HandleID="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Workload="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000283e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d7f459564-nfkfg", "timestamp":"2026-03-02 12:59:21.14399307 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038d080)} Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.163 [INFO][4511] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.163 [INFO][4511] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.163 [INFO][4511] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.173 [INFO][4511] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.188 [INFO][4511] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.195 [INFO][4511] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.197 [INFO][4511] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.200 [INFO][4511] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.200 [INFO][4511] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.204 [INFO][4511] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.219 [INFO][4511] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.247 [INFO][4511] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.247 [INFO][4511] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" host="localhost" Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.247 [INFO][4511] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:59:21.334197 containerd[1473]: 2026-03-02 12:59:21.247 [INFO][4511] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" HandleID="k8s-pod-network.deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Workload="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.335147 containerd[1473]: 2026-03-02 12:59:21.258 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d7f459564--nfkfg-eth0", GenerateName:"whisker-6d7f459564-", Namespace:"calico-system", SelfLink:"", UID:"ffff6692-fe51-4dae-b36d-bd885e06fb78", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d7f459564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d7f459564-nfkfg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f52efc31c0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:21.335147 containerd[1473]: 2026-03-02 12:59:21.258 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.335147 containerd[1473]: 2026-03-02 12:59:21.258 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f52efc31c0 ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.335147 containerd[1473]: 2026-03-02 12:59:21.285 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.335147 containerd[1473]: 2026-03-02 12:59:21.289 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d7f459564--nfkfg-eth0", GenerateName:"whisker-6d7f459564-", Namespace:"calico-system", SelfLink:"", UID:"ffff6692-fe51-4dae-b36d-bd885e06fb78", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d7f459564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a", Pod:"whisker-6d7f459564-nfkfg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f52efc31c0", MAC:"56:f6:d1:a3:5f:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:21.335147 containerd[1473]: 2026-03-02 12:59:21.317 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a" Namespace="calico-system" Pod="whisker-6d7f459564-nfkfg" WorkloadEndpoint="localhost-k8s-whisker--6d7f459564--nfkfg-eth0" Mar 2 12:59:21.367259 systemd[1]: Started cri-containerd-c8d7dc90f4eea955798e615ade07f2a5f3985fd7d0d28c30b64e887331ff16f1.scope - libcontainer container c8d7dc90f4eea955798e615ade07f2a5f3985fd7d0d28c30b64e887331ff16f1. 
Mar 2 12:59:21.421386 containerd[1473]: time="2026-03-02T12:59:21.420880132Z" level=info msg="StartContainer for \"c8d7dc90f4eea955798e615ade07f2a5f3985fd7d0d28c30b64e887331ff16f1\" returns successfully" Mar 2 12:59:21.435667 containerd[1473]: time="2026-03-02T12:59:21.435606374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:21.437806 containerd[1473]: time="2026-03-02T12:59:21.437737663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 12:59:21.442321 containerd[1473]: time="2026-03-02T12:59:21.442261506Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:21.450093 containerd[1473]: time="2026-03-02T12:59:21.447412874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:21.450093 containerd[1473]: time="2026-03-02T12:59:21.449920943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 1.777092783s" Mar 2 12:59:21.450093 containerd[1473]: time="2026-03-02T12:59:21.449953934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 12:59:21.450294 containerd[1473]: time="2026-03-02T12:59:21.448830026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:21.450294 containerd[1473]: time="2026-03-02T12:59:21.448939549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:21.450294 containerd[1473]: time="2026-03-02T12:59:21.448958955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:21.450294 containerd[1473]: time="2026-03-02T12:59:21.449313364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:21.452838 containerd[1473]: time="2026-03-02T12:59:21.452803046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:59:21.458970 containerd[1473]: time="2026-03-02T12:59:21.458929225Z" level=info msg="CreateContainer within sandbox \"7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 12:59:21.507164 containerd[1473]: time="2026-03-02T12:59:21.505859617Z" level=info msg="CreateContainer within sandbox \"7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"483ed646e6d388bead4ee68e1f565e6fe231438c8420017057abf4dadb258cd0\"" Mar 2 12:59:21.508453 containerd[1473]: time="2026-03-02T12:59:21.508321176Z" level=info msg="StartContainer for \"483ed646e6d388bead4ee68e1f565e6fe231438c8420017057abf4dadb258cd0\"" Mar 2 12:59:21.512337 systemd[1]: Started cri-containerd-deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a.scope - libcontainer container deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a. 
Mar 2 12:59:21.541806 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:21.560257 systemd[1]: Started cri-containerd-483ed646e6d388bead4ee68e1f565e6fe231438c8420017057abf4dadb258cd0.scope - libcontainer container 483ed646e6d388bead4ee68e1f565e6fe231438c8420017057abf4dadb258cd0. Mar 2 12:59:21.591602 containerd[1473]: time="2026-03-02T12:59:21.591480728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7f459564-nfkfg,Uid:ffff6692-fe51-4dae-b36d-bd885e06fb78,Namespace:calico-system,Attempt:0,} returns sandbox id \"deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a\"" Mar 2 12:59:21.620196 systemd-networkd[1387]: cali8adb6701f7a: Gained IPv6LL Mar 2 12:59:21.630779 containerd[1473]: time="2026-03-02T12:59:21.630706935Z" level=info msg="StartContainer for \"483ed646e6d388bead4ee68e1f565e6fe231438c8420017057abf4dadb258cd0\" returns successfully" Mar 2 12:59:21.669899 systemd[1]: run-containerd-runc-k8s.io-b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8-runc.oK90gQ.mount: Deactivated successfully. 
Mar 2 12:59:21.746668 kubelet[2554]: I0302 12:59:21.745276 2554 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d" path="/var/lib/kubelet/pods/6ed2a71a-3c1f-4929-9f89-b7aec80e5c6d/volumes" Mar 2 12:59:21.746120 systemd-networkd[1387]: caliacf5f7fd8a6: Gained IPv6LL Mar 2 12:59:22.129332 systemd-networkd[1387]: cali83dc4ae15b6: Gained IPv6LL Mar 2 12:59:22.194246 systemd-networkd[1387]: cali0943502b6da: Gained IPv6LL Mar 2 12:59:22.252604 kubelet[2554]: E0302 12:59:22.252511 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:22.260206 kubelet[2554]: E0302 12:59:22.259096 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:22.259354 systemd-networkd[1387]: cali3c9aea08971: Gained IPv6LL Mar 2 12:59:22.303407 kubelet[2554]: I0302 12:59:22.303228 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w7cmz" podStartSLOduration=40.303202871 podStartE2EDuration="40.303202871s" podCreationTimestamp="2026-03-02 12:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:59:22.302849143 +0000 UTC m=+44.784531734" watchObservedRunningTime="2026-03-02 12:59:22.303202871 +0000 UTC m=+44.784885461" Mar 2 12:59:22.513225 systemd-networkd[1387]: caliba903407163: Gained IPv6LL Mar 2 12:59:22.609836 kubelet[2554]: I0302 12:59:22.608492 2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:59:22.609836 kubelet[2554]: E0302 12:59:22.608864 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:22.887381 containerd[1473]: time="2026-03-02T12:59:22.887234106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:22.889324 containerd[1473]: time="2026-03-02T12:59:22.889275885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 12:59:22.891185 containerd[1473]: time="2026-03-02T12:59:22.891115878Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:22.896419 containerd[1473]: time="2026-03-02T12:59:22.896243504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:22.897614 containerd[1473]: time="2026-03-02T12:59:22.897561241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 1.444605291s" Mar 2 12:59:22.897614 containerd[1473]: time="2026-03-02T12:59:22.897604883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:59:22.900281 containerd[1473]: time="2026-03-02T12:59:22.900251739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 12:59:22.905697 containerd[1473]: time="2026-03-02T12:59:22.905661351Z" level=info msg="CreateContainer within sandbox 
\"903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:59:23.006896 containerd[1473]: time="2026-03-02T12:59:23.006704303Z" level=info msg="CreateContainer within sandbox \"903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"61c91a16cd59b86c6f62dcabc9a33d2f8053719d17c080e4d5f7a22d7cf78288\"" Mar 2 12:59:23.009252 containerd[1473]: time="2026-03-02T12:59:23.008123351Z" level=info msg="StartContainer for \"61c91a16cd59b86c6f62dcabc9a33d2f8053719d17c080e4d5f7a22d7cf78288\"" Mar 2 12:59:23.066660 systemd[1]: Started cri-containerd-61c91a16cd59b86c6f62dcabc9a33d2f8053719d17c080e4d5f7a22d7cf78288.scope - libcontainer container 61c91a16cd59b86c6f62dcabc9a33d2f8053719d17c080e4d5f7a22d7cf78288. Mar 2 12:59:23.090790 systemd-networkd[1387]: cali9f52efc31c0: Gained IPv6LL Mar 2 12:59:23.158926 containerd[1473]: time="2026-03-02T12:59:23.158649887Z" level=info msg="StartContainer for \"61c91a16cd59b86c6f62dcabc9a33d2f8053719d17c080e4d5f7a22d7cf78288\" returns successfully" Mar 2 12:59:23.257901 kubelet[2554]: E0302 12:59:23.257822 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:23.262044 kubelet[2554]: E0302 12:59:23.261922 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:23.264089 kubelet[2554]: E0302 12:59:23.263830 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:23.277478 kubelet[2554]: I0302 12:59:23.277412 2554 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-system/calico-apiserver-5bc544cbd4-nx2cs" podStartSLOduration=21.657571258 podStartE2EDuration="24.277392868s" podCreationTimestamp="2026-03-02 12:58:59 +0000 UTC" firstStartedPulling="2026-03-02 12:59:20.279891041 +0000 UTC m=+42.761573630" lastFinishedPulling="2026-03-02 12:59:22.89971265 +0000 UTC m=+45.381395240" observedRunningTime="2026-03-02 12:59:23.276626674 +0000 UTC m=+45.758309294" watchObservedRunningTime="2026-03-02 12:59:23.277392868 +0000 UTC m=+45.759075458" Mar 2 12:59:23.359154 kernel: calico-node[4787]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 12:59:24.197620 systemd-networkd[1387]: vxlan.calico: Link UP Mar 2 12:59:24.199162 systemd-networkd[1387]: vxlan.calico: Gained carrier Mar 2 12:59:24.262838 kubelet[2554]: I0302 12:59:24.261598 2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:59:24.262838 kubelet[2554]: E0302 12:59:24.262471 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:25.267452 kubelet[2554]: E0302 12:59:25.267412 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:25.329530 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Mar 2 12:59:26.065464 containerd[1473]: time="2026-03-02T12:59:26.065360492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:26.067210 containerd[1473]: time="2026-03-02T12:59:26.066987610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 12:59:26.070118 containerd[1473]: time="2026-03-02T12:59:26.069937793Z" level=info msg="ImageCreate event 
name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:26.075992 containerd[1473]: time="2026-03-02T12:59:26.075857569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:26.077103 containerd[1473]: time="2026-03-02T12:59:26.076910224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 3.176142666s" Mar 2 12:59:26.077103 containerd[1473]: time="2026-03-02T12:59:26.076992948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 12:59:26.085733 containerd[1473]: time="2026-03-02T12:59:26.084966540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:59:26.124763 containerd[1473]: time="2026-03-02T12:59:26.124702787Z" level=info msg="CreateContainer within sandbox \"bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 12:59:26.164463 containerd[1473]: time="2026-03-02T12:59:26.164371908Z" level=info msg="CreateContainer within sandbox \"bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a71d95086f11b782fee78fb3eb8c9f2c1f946fd8c00bd8f01f0f9fd85c309ae4\"" Mar 2 12:59:26.165510 containerd[1473]: 
time="2026-03-02T12:59:26.165400264Z" level=info msg="StartContainer for \"a71d95086f11b782fee78fb3eb8c9f2c1f946fd8c00bd8f01f0f9fd85c309ae4\"" Mar 2 12:59:26.221253 systemd[1]: Started cri-containerd-a71d95086f11b782fee78fb3eb8c9f2c1f946fd8c00bd8f01f0f9fd85c309ae4.scope - libcontainer container a71d95086f11b782fee78fb3eb8c9f2c1f946fd8c00bd8f01f0f9fd85c309ae4. Mar 2 12:59:26.237292 containerd[1473]: time="2026-03-02T12:59:26.237194957Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:26.239917 containerd[1473]: time="2026-03-02T12:59:26.239852326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 12:59:26.243771 containerd[1473]: time="2026-03-02T12:59:26.243680948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 158.563367ms" Mar 2 12:59:26.243771 containerd[1473]: time="2026-03-02T12:59:26.243735479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:59:26.246812 containerd[1473]: time="2026-03-02T12:59:26.246604537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 12:59:26.257301 containerd[1473]: time="2026-03-02T12:59:26.257133697Z" level=info msg="CreateContainer within sandbox \"b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:59:26.328668 containerd[1473]: time="2026-03-02T12:59:26.328138963Z" level=info msg="StartContainer 
for \"a71d95086f11b782fee78fb3eb8c9f2c1f946fd8c00bd8f01f0f9fd85c309ae4\" returns successfully" Mar 2 12:59:26.385668 containerd[1473]: time="2026-03-02T12:59:26.384457420Z" level=info msg="CreateContainer within sandbox \"b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff6f171a3b3f28a07271c4df704477a83290e6b0f75f8ebdb83f4a7efc6acc3c\"" Mar 2 12:59:26.398137 containerd[1473]: time="2026-03-02T12:59:26.396147038Z" level=info msg="StartContainer for \"ff6f171a3b3f28a07271c4df704477a83290e6b0f75f8ebdb83f4a7efc6acc3c\"" Mar 2 12:59:26.475427 systemd[1]: Started cri-containerd-ff6f171a3b3f28a07271c4df704477a83290e6b0f75f8ebdb83f4a7efc6acc3c.scope - libcontainer container ff6f171a3b3f28a07271c4df704477a83290e6b0f75f8ebdb83f4a7efc6acc3c. Mar 2 12:59:26.613584 containerd[1473]: time="2026-03-02T12:59:26.612957543Z" level=info msg="StartContainer for \"ff6f171a3b3f28a07271c4df704477a83290e6b0f75f8ebdb83f4a7efc6acc3c\" returns successfully" Mar 2 12:59:27.377694 kubelet[2554]: I0302 12:59:27.377144 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74c4f95764-z2fkz" podStartSLOduration=21.98029052 podStartE2EDuration="27.377118632s" podCreationTimestamp="2026-03-02 12:59:00 +0000 UTC" firstStartedPulling="2026-03-02 12:59:20.687830705 +0000 UTC m=+43.169513294" lastFinishedPulling="2026-03-02 12:59:26.084658816 +0000 UTC m=+48.566341406" observedRunningTime="2026-03-02 12:59:27.374854173 +0000 UTC m=+49.856536773" watchObservedRunningTime="2026-03-02 12:59:27.377118632 +0000 UTC m=+49.858801232" Mar 2 12:59:27.553921 kubelet[2554]: I0302 12:59:27.553730 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bc544cbd4-7q7cm" podStartSLOduration=23.371979211 podStartE2EDuration="28.553704685s" podCreationTimestamp="2026-03-02 12:58:59 +0000 UTC" 
firstStartedPulling="2026-03-02 12:59:21.063897603 +0000 UTC m=+43.545580194" lastFinishedPulling="2026-03-02 12:59:26.245623068 +0000 UTC m=+48.727305668" observedRunningTime="2026-03-02 12:59:27.401444922 +0000 UTC m=+49.883127542" watchObservedRunningTime="2026-03-02 12:59:27.553704685 +0000 UTC m=+50.035387276" Mar 2 12:59:28.165991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256780084.mount: Deactivated successfully. Mar 2 12:59:28.354361 kubelet[2554]: I0302 12:59:28.354169 2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:59:29.127519 containerd[1473]: time="2026-03-02T12:59:29.127413401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:29.131433 containerd[1473]: time="2026-03-02T12:59:29.131263194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 12:59:29.133570 containerd[1473]: time="2026-03-02T12:59:29.133470352Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:29.172706 containerd[1473]: time="2026-03-02T12:59:29.172594140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:29.174260 containerd[1473]: time="2026-03-02T12:59:29.174188193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 2.927530578s" Mar 2 
12:59:29.174260 containerd[1473]: time="2026-03-02T12:59:29.174231123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 12:59:29.176517 containerd[1473]: time="2026-03-02T12:59:29.176397941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 12:59:29.183285 containerd[1473]: time="2026-03-02T12:59:29.183149444Z" level=info msg="CreateContainer within sandbox \"11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 12:59:29.220604 containerd[1473]: time="2026-03-02T12:59:29.220385381Z" level=info msg="CreateContainer within sandbox \"11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1\"" Mar 2 12:59:29.223276 containerd[1473]: time="2026-03-02T12:59:29.222312246Z" level=info msg="StartContainer for \"74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1\"" Mar 2 12:59:29.333288 systemd[1]: Started cri-containerd-74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1.scope - libcontainer container 74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1. 
Mar 2 12:59:29.404685 containerd[1473]: time="2026-03-02T12:59:29.404455834Z" level=info msg="StartContainer for \"74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1\" returns successfully" Mar 2 12:59:30.024643 containerd[1473]: time="2026-03-02T12:59:30.024458593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:30.027605 containerd[1473]: time="2026-03-02T12:59:30.027519812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825" Mar 2 12:59:30.029864 containerd[1473]: time="2026-03-02T12:59:30.029793808Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:30.033274 containerd[1473]: time="2026-03-02T12:59:30.033211579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:30.047857 containerd[1473]: time="2026-03-02T12:59:30.047774101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 871.338139ms" Mar 2 12:59:30.047857 containerd[1473]: time="2026-03-02T12:59:30.047815809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\"" Mar 2 12:59:30.051948 containerd[1473]: time="2026-03-02T12:59:30.050659377Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 12:59:30.059912 containerd[1473]: time="2026-03-02T12:59:30.059715457Z" level=info msg="CreateContainer within sandbox \"deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 12:59:30.151175 containerd[1473]: time="2026-03-02T12:59:30.150957369Z" level=info msg="CreateContainer within sandbox \"deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3da38943287fced93d25e632864bfbc5c360ff7d1f601a39955f328886802f6a\"" Mar 2 12:59:30.155339 containerd[1473]: time="2026-03-02T12:59:30.155195868Z" level=info msg="StartContainer for \"3da38943287fced93d25e632864bfbc5c360ff7d1f601a39955f328886802f6a\"" Mar 2 12:59:30.204612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426031423.mount: Deactivated successfully. Mar 2 12:59:30.227876 systemd[1]: Started cri-containerd-3da38943287fced93d25e632864bfbc5c360ff7d1f601a39955f328886802f6a.scope - libcontainer container 3da38943287fced93d25e632864bfbc5c360ff7d1f601a39955f328886802f6a. 
Mar 2 12:59:30.304904 containerd[1473]: time="2026-03-02T12:59:30.304709207Z" level=info msg="StartContainer for \"3da38943287fced93d25e632864bfbc5c360ff7d1f601a39955f328886802f6a\" returns successfully" Mar 2 12:59:30.703096 kubelet[2554]: I0302 12:59:30.702448 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-9566f57b5-dclsc" podStartSLOduration=23.700380232 podStartE2EDuration="31.702428589s" podCreationTimestamp="2026-03-02 12:58:59 +0000 UTC" firstStartedPulling="2026-03-02 12:59:21.173678007 +0000 UTC m=+43.655360597" lastFinishedPulling="2026-03-02 12:59:29.175726364 +0000 UTC m=+51.657408954" observedRunningTime="2026-03-02 12:59:30.466314054 +0000 UTC m=+52.947996654" watchObservedRunningTime="2026-03-02 12:59:30.702428589 +0000 UTC m=+53.184111178" Mar 2 12:59:31.159621 containerd[1473]: time="2026-03-02T12:59:31.159485123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:31.161548 containerd[1473]: time="2026-03-02T12:59:31.160890726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 12:59:31.167216 containerd[1473]: time="2026-03-02T12:59:31.167127921Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:31.171867 containerd[1473]: time="2026-03-02T12:59:31.171768860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:31.173493 containerd[1473]: time="2026-03-02T12:59:31.173399482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id 
\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 1.122617588s" Mar 2 12:59:31.173567 containerd[1473]: time="2026-03-02T12:59:31.173476576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 12:59:31.175984 containerd[1473]: time="2026-03-02T12:59:31.175474283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 12:59:31.181731 containerd[1473]: time="2026-03-02T12:59:31.181381758Z" level=info msg="CreateContainer within sandbox \"7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 12:59:31.207591 containerd[1473]: time="2026-03-02T12:59:31.207489376Z" level=info msg="CreateContainer within sandbox \"7b83214bd51a0c69c78545fa9822feab93b8a9ecad53648b9b4a5c4c03f03727\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4fe5533f750554d2cbb065681d70c211fd8439664f67c8d04fa93cebe95465d4\"" Mar 2 12:59:31.208509 containerd[1473]: time="2026-03-02T12:59:31.208358573Z" level=info msg="StartContainer for \"4fe5533f750554d2cbb065681d70c211fd8439664f67c8d04fa93cebe95465d4\"" Mar 2 12:59:31.310298 systemd[1]: Started cri-containerd-4fe5533f750554d2cbb065681d70c211fd8439664f67c8d04fa93cebe95465d4.scope - libcontainer container 4fe5533f750554d2cbb065681d70c211fd8439664f67c8d04fa93cebe95465d4. 
Mar 2 12:59:31.352881 containerd[1473]: time="2026-03-02T12:59:31.352757129Z" level=info msg="StartContainer for \"4fe5533f750554d2cbb065681d70c211fd8439664f67c8d04fa93cebe95465d4\" returns successfully" Mar 2 12:59:31.481167 kubelet[2554]: I0302 12:59:31.480823 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w67n9" podStartSLOduration=19.977758322 podStartE2EDuration="31.480799358s" podCreationTimestamp="2026-03-02 12:59:00 +0000 UTC" firstStartedPulling="2026-03-02 12:59:19.672241229 +0000 UTC m=+42.153923820" lastFinishedPulling="2026-03-02 12:59:31.175282267 +0000 UTC m=+53.656964856" observedRunningTime="2026-03-02 12:59:31.4759233 +0000 UTC m=+53.957605890" watchObservedRunningTime="2026-03-02 12:59:31.480799358 +0000 UTC m=+53.962481958" Mar 2 12:59:31.950728 kubelet[2554]: I0302 12:59:31.950609 2554 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 12:59:31.954384 kubelet[2554]: I0302 12:59:31.953917 2554 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 12:59:32.215931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585642335.mount: Deactivated successfully. 
Mar 2 12:59:32.305929 containerd[1473]: time="2026-03-02T12:59:32.305799165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:32.307297 containerd[1473]: time="2026-03-02T12:59:32.307215274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119" Mar 2 12:59:32.308974 containerd[1473]: time="2026-03-02T12:59:32.308900451Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:32.312988 containerd[1473]: time="2026-03-02T12:59:32.312860214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:32.314133 containerd[1473]: time="2026-03-02T12:59:32.313981108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 1.138463324s" Mar 2 12:59:32.314230 containerd[1473]: time="2026-03-02T12:59:32.314138221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\"" Mar 2 12:59:32.341334 containerd[1473]: time="2026-03-02T12:59:32.341277306Z" level=info msg="CreateContainer within sandbox \"deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 12:59:32.360780 
containerd[1473]: time="2026-03-02T12:59:32.360683979Z" level=info msg="CreateContainer within sandbox \"deb4a95885cc6384ba1070efd952cc4ca66e40f62937be92f72debe6c140338a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"966abb3b11cee05cc02091568a9bac39171699ec69aafdaa0caeed603ebfe769\"" Mar 2 12:59:32.363228 containerd[1473]: time="2026-03-02T12:59:32.362219565Z" level=info msg="StartContainer for \"966abb3b11cee05cc02091568a9bac39171699ec69aafdaa0caeed603ebfe769\"" Mar 2 12:59:32.440389 systemd[1]: Started cri-containerd-966abb3b11cee05cc02091568a9bac39171699ec69aafdaa0caeed603ebfe769.scope - libcontainer container 966abb3b11cee05cc02091568a9bac39171699ec69aafdaa0caeed603ebfe769. Mar 2 12:59:32.505372 containerd[1473]: time="2026-03-02T12:59:32.504350288Z" level=info msg="StartContainer for \"966abb3b11cee05cc02091568a9bac39171699ec69aafdaa0caeed603ebfe769\" returns successfully" Mar 2 12:59:33.350364 kubelet[2554]: I0302 12:59:33.350239 2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 12:59:33.483915 kubelet[2554]: I0302 12:59:33.483750 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d7f459564-nfkfg" podStartSLOduration=2.762803747 podStartE2EDuration="13.483728134s" podCreationTimestamp="2026-03-02 12:59:20 +0000 UTC" firstStartedPulling="2026-03-02 12:59:21.594590033 +0000 UTC m=+44.076272824" lastFinishedPulling="2026-03-02 12:59:32.31551462 +0000 UTC m=+54.797197211" observedRunningTime="2026-03-02 12:59:33.482993625 +0000 UTC m=+55.964676216" watchObservedRunningTime="2026-03-02 12:59:33.483728134 +0000 UTC m=+55.965410723" Mar 2 12:59:37.700075 containerd[1473]: time="2026-03-02T12:59:37.699922864Z" level=info msg="StopPodSandbox for \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\"" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.836 [WARNING][5353] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--dclsc-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"214b37e0-0ea7-495d-89ba-9790d04fdf36", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2", Pod:"goldmane-9566f57b5-dclsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0943502b6da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.837 [INFO][5353] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.837 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" iface="eth0" netns="" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.837 [INFO][5353] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.837 [INFO][5353] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.931 [INFO][5361] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.932 [INFO][5361] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.932 [INFO][5361] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.946 [WARNING][5361] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.946 [INFO][5361] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.951 [INFO][5361] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:37.960232 containerd[1473]: 2026-03-02 12:59:37.955 [INFO][5353] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:37.971058 containerd[1473]: time="2026-03-02T12:59:37.970889778Z" level=info msg="TearDown network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\" successfully" Mar 2 12:59:37.971228 containerd[1473]: time="2026-03-02T12:59:37.970984486Z" level=info msg="StopPodSandbox for \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\" returns successfully" Mar 2 12:59:37.996755 containerd[1473]: time="2026-03-02T12:59:37.996674230Z" level=info msg="RemovePodSandbox for \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\"" Mar 2 12:59:37.999125 containerd[1473]: time="2026-03-02T12:59:37.998991844Z" level=info msg="Forcibly stopping sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\"" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.059 [WARNING][5379] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9566f57b5--dclsc-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"214b37e0-0ea7-495d-89ba-9790d04fdf36", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11fad34898cf6d68d808de23ae33cbdf7fdc230fb6d278367ee0491c761989d2", Pod:"goldmane-9566f57b5-dclsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0943502b6da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.060 [INFO][5379] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.060 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" iface="eth0" netns="" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.060 [INFO][5379] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.060 [INFO][5379] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.092 [INFO][5387] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.092 [INFO][5387] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.092 [INFO][5387] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.101 [WARNING][5387] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.101 [INFO][5387] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" HandleID="k8s-pod-network.2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Workload="localhost-k8s-goldmane--9566f57b5--dclsc-eth0" Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.103 [INFO][5387] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.111628 containerd[1473]: 2026-03-02 12:59:38.107 [INFO][5379] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f" Mar 2 12:59:38.111628 containerd[1473]: time="2026-03-02T12:59:38.111454581Z" level=info msg="TearDown network for sandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\" successfully" Mar 2 12:59:38.141327 containerd[1473]: time="2026-03-02T12:59:38.141153146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 12:59:38.141498 containerd[1473]: time="2026-03-02T12:59:38.141358588Z" level=info msg="RemovePodSandbox \"2b363ed5713cacfdb118617ff87699eacef9f483e692a9b2efd5a3c803f4418f\" returns successfully" Mar 2 12:59:38.149631 containerd[1473]: time="2026-03-02T12:59:38.149581890Z" level=info msg="StopPodSandbox for \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\"" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.203 [WARNING][5405] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"2f74b832-0faa-4b95-8876-eccbea5d41d7", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4", Pod:"calico-apiserver-5bc544cbd4-nx2cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8adb6701f7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.207 [INFO][5405] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.207 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" iface="eth0" netns="" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.207 [INFO][5405] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.207 [INFO][5405] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.264 [INFO][5415] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.264 [INFO][5415] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.264 [INFO][5415] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.274 [WARNING][5415] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.274 [INFO][5415] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.276 [INFO][5415] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.282836 containerd[1473]: 2026-03-02 12:59:38.279 [INFO][5405] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.283809 containerd[1473]: time="2026-03-02T12:59:38.283626273Z" level=info msg="TearDown network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\" successfully" Mar 2 12:59:38.283809 containerd[1473]: time="2026-03-02T12:59:38.283706302Z" level=info msg="StopPodSandbox for \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\" returns successfully" Mar 2 12:59:38.285095 containerd[1473]: time="2026-03-02T12:59:38.284931297Z" level=info msg="RemovePodSandbox for \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\"" Mar 2 12:59:38.285095 containerd[1473]: time="2026-03-02T12:59:38.285086707Z" level=info msg="Forcibly stopping sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\"" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.366 [WARNING][5435] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"2f74b832-0faa-4b95-8876-eccbea5d41d7", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"903c2d8e57af709a1c72df8b63dc85ac6a7692e0afe71e5f473c35f4b9c575b4", Pod:"calico-apiserver-5bc544cbd4-nx2cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8adb6701f7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.367 [INFO][5435] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.367 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" iface="eth0" netns="" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.367 [INFO][5435] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.367 [INFO][5435] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.395 [INFO][5443] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.395 [INFO][5443] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.395 [INFO][5443] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.405 [WARNING][5443] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.406 [INFO][5443] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" HandleID="k8s-pod-network.6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--nx2cs-eth0" Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.412 [INFO][5443] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.419387 containerd[1473]: 2026-03-02 12:59:38.415 [INFO][5435] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea" Mar 2 12:59:38.419387 containerd[1473]: time="2026-03-02T12:59:38.419277357Z" level=info msg="TearDown network for sandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\" successfully" Mar 2 12:59:38.430671 containerd[1473]: time="2026-03-02T12:59:38.430221286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 12:59:38.430671 containerd[1473]: time="2026-03-02T12:59:38.430326672Z" level=info msg="RemovePodSandbox \"6aac8a7787f920e10affda3c0f6b393b0848debcf74444ed91775cccc30efbea\" returns successfully" Mar 2 12:59:38.431830 containerd[1473]: time="2026-03-02T12:59:38.431428208Z" level=info msg="StopPodSandbox for \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\"" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.501 [WARNING][5461] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" WorkloadEndpoint="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.501 [INFO][5461] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.501 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" iface="eth0" netns="" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.501 [INFO][5461] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.501 [INFO][5461] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.545 [INFO][5469] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.546 [INFO][5469] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.546 [INFO][5469] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.554 [WARNING][5469] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.554 [INFO][5469] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.556 [INFO][5469] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.562343 containerd[1473]: 2026-03-02 12:59:38.559 [INFO][5461] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.562981 containerd[1473]: time="2026-03-02T12:59:38.562398028Z" level=info msg="TearDown network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\" successfully" Mar 2 12:59:38.562981 containerd[1473]: time="2026-03-02T12:59:38.562439494Z" level=info msg="StopPodSandbox for \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\" returns successfully" Mar 2 12:59:38.563467 containerd[1473]: time="2026-03-02T12:59:38.563333059Z" level=info msg="RemovePodSandbox for \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\"" Mar 2 12:59:38.563467 containerd[1473]: time="2026-03-02T12:59:38.563423448Z" level=info msg="Forcibly stopping sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\"" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.619 [WARNING][5488] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" 
WorkloadEndpoint="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.619 [INFO][5488] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.619 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" iface="eth0" netns="" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.619 [INFO][5488] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.619 [INFO][5488] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.656 [INFO][5497] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.656 [INFO][5497] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.656 [INFO][5497] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.667 [WARNING][5497] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.667 [INFO][5497] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" HandleID="k8s-pod-network.3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Workload="localhost-k8s-whisker--557c4f875b--4mvrb-eth0" Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.669 [INFO][5497] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.676794 containerd[1473]: 2026-03-02 12:59:38.673 [INFO][5488] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110" Mar 2 12:59:38.676794 containerd[1473]: time="2026-03-02T12:59:38.676777598Z" level=info msg="TearDown network for sandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\" successfully" Mar 2 12:59:38.689224 containerd[1473]: time="2026-03-02T12:59:38.689161779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 12:59:38.689387 containerd[1473]: time="2026-03-02T12:59:38.689239184Z" level=info msg="RemovePodSandbox \"3126f26afce14baaff82185045e207f1fa9ae7b1d5b92b475635a1fb92d78110\" returns successfully" Mar 2 12:59:38.690101 containerd[1473]: time="2026-03-02T12:59:38.689894845Z" level=info msg="StopPodSandbox for \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\"" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.750 [WARNING][5516] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d794842-cae6-42e3-92b8-3b3c0e54e550", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad", Pod:"coredns-674b8bbfcf-w7cmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba903407163", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.751 [INFO][5516] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.751 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" iface="eth0" netns="" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.751 [INFO][5516] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.751 [INFO][5516] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.781 [INFO][5525] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.781 [INFO][5525] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.782 [INFO][5525] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.793 [WARNING][5525] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.793 [INFO][5525] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.796 [INFO][5525] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.804145 containerd[1473]: 2026-03-02 12:59:38.800 [INFO][5516] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.804867 containerd[1473]: time="2026-03-02T12:59:38.804220352Z" level=info msg="TearDown network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\" successfully" Mar 2 12:59:38.804867 containerd[1473]: time="2026-03-02T12:59:38.804260527Z" level=info msg="StopPodSandbox for \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\" returns successfully" Mar 2 12:59:38.805299 containerd[1473]: time="2026-03-02T12:59:38.805256150Z" level=info msg="RemovePodSandbox for \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\"" Mar 2 12:59:38.805353 containerd[1473]: time="2026-03-02T12:59:38.805319939Z" level=info msg="Forcibly stopping sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\"" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.861 [WARNING][5542] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d794842-cae6-42e3-92b8-3b3c0e54e550", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12b8936c580b124bf797d083434acee62fff1c9b523203dd2b4302494f3bb9ad", Pod:"coredns-674b8bbfcf-w7cmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba903407163", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.862 [INFO][5542] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.862 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" iface="eth0" netns="" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.862 [INFO][5542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.862 [INFO][5542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.895 [INFO][5551] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.895 [INFO][5551] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.896 [INFO][5551] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.902 [WARNING][5551] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.902 [INFO][5551] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" HandleID="k8s-pod-network.f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Workload="localhost-k8s-coredns--674b8bbfcf--w7cmz-eth0" Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.906 [INFO][5551] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:38.913508 containerd[1473]: 2026-03-02 12:59:38.910 [INFO][5542] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9" Mar 2 12:59:38.913508 containerd[1473]: time="2026-03-02T12:59:38.913403586Z" level=info msg="TearDown network for sandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\" successfully" Mar 2 12:59:38.918616 containerd[1473]: time="2026-03-02T12:59:38.918532678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 12:59:38.918686 containerd[1473]: time="2026-03-02T12:59:38.918636000Z" level=info msg="RemovePodSandbox \"f1489e3d7dfcf6c812b2fcb4890f65e53eefe288118cce0541c2c8186706e1e9\" returns successfully" Mar 2 12:59:38.919465 containerd[1473]: time="2026-03-02T12:59:38.919427082Z" level=info msg="StopPodSandbox for \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\"" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:38.969 [WARNING][5568] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1615fc41-91d4-4d09-afc6-7512c37dc161", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac", Pod:"coredns-674b8bbfcf-bpfqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83dc4ae15b6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:38.970 [INFO][5568] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:38.970 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" iface="eth0" netns="" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:38.970 [INFO][5568] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:38.970 [INFO][5568] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.003 [INFO][5576] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.003 [INFO][5576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.003 [INFO][5576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.011 [WARNING][5576] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.011 [INFO][5576] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.013 [INFO][5576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:39.020507 containerd[1473]: 2026-03-02 12:59:39.017 [INFO][5568] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.020968 containerd[1473]: time="2026-03-02T12:59:39.020536593Z" level=info msg="TearDown network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\" successfully" Mar 2 12:59:39.020968 containerd[1473]: time="2026-03-02T12:59:39.020580184Z" level=info msg="StopPodSandbox for \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\" returns successfully" Mar 2 12:59:39.021449 containerd[1473]: time="2026-03-02T12:59:39.021404471Z" level=info msg="RemovePodSandbox for \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\"" Mar 2 12:59:39.021487 containerd[1473]: time="2026-03-02T12:59:39.021460035Z" level=info msg="Forcibly stopping sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\"" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.079 [WARNING][5594] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1615fc41-91d4-4d09-afc6-7512c37dc161", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80f8e77c844f136ed02f55068d331fd3203c79438c4b63afc2a5715cac9928ac", Pod:"coredns-674b8bbfcf-bpfqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83dc4ae15b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.080 [INFO][5594] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.080 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" iface="eth0" netns="" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.080 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.080 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.123 [INFO][5603] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.123 [INFO][5603] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.123 [INFO][5603] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.132 [WARNING][5603] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.132 [INFO][5603] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" HandleID="k8s-pod-network.3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Workload="localhost-k8s-coredns--674b8bbfcf--bpfqk-eth0" Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.135 [INFO][5603] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:39.143598 containerd[1473]: 2026-03-02 12:59:39.139 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf" Mar 2 12:59:39.144149 containerd[1473]: time="2026-03-02T12:59:39.143631748Z" level=info msg="TearDown network for sandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\" successfully" Mar 2 12:59:39.149309 containerd[1473]: time="2026-03-02T12:59:39.149267116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 12:59:39.149386 containerd[1473]: time="2026-03-02T12:59:39.149353918Z" level=info msg="RemovePodSandbox \"3a069b04a06f00a5e23481ea6f7445c56abefda07ca10e7813b00560f7d8d0cf\" returns successfully" Mar 2 12:59:39.150402 containerd[1473]: time="2026-03-02T12:59:39.150355884Z" level=info msg="StopPodSandbox for \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\"" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.207 [WARNING][5620] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0", GenerateName:"calico-kube-controllers-74c4f95764-", Namespace:"calico-system", SelfLink:"", UID:"6ddf42e0-6cd1-4b95-8cfd-884ff77a512d", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c4f95764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4", Pod:"calico-kube-controllers-74c4f95764-z2fkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c9aea08971", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.207 [INFO][5620] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.207 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" iface="eth0" netns="" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.207 [INFO][5620] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.207 [INFO][5620] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.245 [INFO][5628] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.246 [INFO][5628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.246 [INFO][5628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.264 [WARNING][5628] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.264 [INFO][5628] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.267 [INFO][5628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:39.273425 containerd[1473]: 2026-03-02 12:59:39.270 [INFO][5620] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.273425 containerd[1473]: time="2026-03-02T12:59:39.273379760Z" level=info msg="TearDown network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\" successfully" Mar 2 12:59:39.273425 containerd[1473]: time="2026-03-02T12:59:39.273416889Z" level=info msg="StopPodSandbox for \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\" returns successfully" Mar 2 12:59:39.274391 containerd[1473]: time="2026-03-02T12:59:39.274211880Z" level=info msg="RemovePodSandbox for \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\"" Mar 2 12:59:39.274391 containerd[1473]: time="2026-03-02T12:59:39.274247726Z" level=info msg="Forcibly stopping sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\"" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.339 [WARNING][5646] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0", GenerateName:"calico-kube-controllers-74c4f95764-", Namespace:"calico-system", SelfLink:"", UID:"6ddf42e0-6cd1-4b95-8cfd-884ff77a512d", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c4f95764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbcb9a80bf76e4526d2bbb11d874fdc45259e7e78cc324a08fd5e312ce2734c4", Pod:"calico-kube-controllers-74c4f95764-z2fkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c9aea08971", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.340 [INFO][5646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.340 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" iface="eth0" netns="" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.340 [INFO][5646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.340 [INFO][5646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.374 [INFO][5655] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.374 [INFO][5655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.374 [INFO][5655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.384 [WARNING][5655] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.384 [INFO][5655] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" HandleID="k8s-pod-network.232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Workload="localhost-k8s-calico--kube--controllers--74c4f95764--z2fkz-eth0" Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.387 [INFO][5655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:39.395824 containerd[1473]: 2026-03-02 12:59:39.391 [INFO][5646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2" Mar 2 12:59:39.396580 containerd[1473]: time="2026-03-02T12:59:39.395858809Z" level=info msg="TearDown network for sandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\" successfully" Mar 2 12:59:39.401728 containerd[1473]: time="2026-03-02T12:59:39.401651512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 12:59:39.401960 containerd[1473]: time="2026-03-02T12:59:39.401777155Z" level=info msg="RemovePodSandbox \"232b0a7a2cf6b68906bf8fb7353436fbb2ecc8d8b0d25a7f25b884439b0b58f2\" returns successfully" Mar 2 12:59:39.403243 containerd[1473]: time="2026-03-02T12:59:39.403206693Z" level=info msg="StopPodSandbox for \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\"" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.473 [WARNING][5674] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"292cf7f8-5770-4cfe-98b8-b56cbdd122c1", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8", Pod:"calico-apiserver-5bc544cbd4-7q7cm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliacf5f7fd8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.473 [INFO][5674] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.473 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" iface="eth0" netns="" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.473 [INFO][5674] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.473 [INFO][5674] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.510 [INFO][5682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.511 [INFO][5682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.511 [INFO][5682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.521 [WARNING][5682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.521 [INFO][5682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.523 [INFO][5682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:39.529790 containerd[1473]: 2026-03-02 12:59:39.526 [INFO][5674] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.529790 containerd[1473]: time="2026-03-02T12:59:39.529704541Z" level=info msg="TearDown network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\" successfully" Mar 2 12:59:39.529790 containerd[1473]: time="2026-03-02T12:59:39.529731983Z" level=info msg="StopPodSandbox for \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\" returns successfully" Mar 2 12:59:39.531414 containerd[1473]: time="2026-03-02T12:59:39.531282848Z" level=info msg="RemovePodSandbox for \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\"" Mar 2 12:59:39.531414 containerd[1473]: time="2026-03-02T12:59:39.531321641Z" level=info msg="Forcibly stopping sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\"" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.583 [WARNING][5700] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0", GenerateName:"calico-apiserver-5bc544cbd4-", Namespace:"calico-system", SelfLink:"", UID:"292cf7f8-5770-4cfe-98b8-b56cbdd122c1", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc544cbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b15291500c1871c42bb36c61aa49c34a64e0e71446f707e000c7caae7e8c91c8", Pod:"calico-apiserver-5bc544cbd4-7q7cm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliacf5f7fd8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.584 [INFO][5700] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.584 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" iface="eth0" netns="" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.584 [INFO][5700] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.584 [INFO][5700] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.620 [INFO][5708] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.620 [INFO][5708] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.620 [INFO][5708] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.628 [WARNING][5708] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.628 [INFO][5708] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" HandleID="k8s-pod-network.d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Workload="localhost-k8s-calico--apiserver--5bc544cbd4--7q7cm-eth0" Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.630 [INFO][5708] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:39.639109 containerd[1473]: 2026-03-02 12:59:39.634 [INFO][5700] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785" Mar 2 12:59:39.639109 containerd[1473]: time="2026-03-02T12:59:39.638445031Z" level=info msg="TearDown network for sandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\" successfully" Mar 2 12:59:39.644079 containerd[1473]: time="2026-03-02T12:59:39.643852049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 12:59:39.644079 containerd[1473]: time="2026-03-02T12:59:39.643985476Z" level=info msg="RemovePodSandbox \"d837f1beeded7bf476a94ba5f2620a8b879c85df6847dac22425cba3dd7f9785\" returns successfully" Mar 2 12:59:46.720607 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:50166.service - OpenSSH per-connection server daemon (10.0.0.1:50166). 
Mar 2 12:59:46.790672 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 50166 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:59:46.794229 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:46.801661 systemd-logind[1456]: New session 8 of user core. Mar 2 12:59:46.810659 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 12:59:47.277372 sshd[5765]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:47.282475 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:50166.service: Deactivated successfully. Mar 2 12:59:47.284557 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 12:59:47.285767 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Mar 2 12:59:47.287450 systemd-logind[1456]: Removed session 8. Mar 2 12:59:52.301585 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:46722.service - OpenSSH per-connection server daemon (10.0.0.1:46722). Mar 2 12:59:52.345793 sshd[5809]: Accepted publickey for core from 10.0.0.1 port 46722 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:59:52.347882 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:52.354460 systemd-logind[1456]: New session 9 of user core. Mar 2 12:59:52.361318 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 12:59:52.557219 sshd[5809]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:52.563616 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:46722.service: Deactivated successfully. Mar 2 12:59:52.566199 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 12:59:52.567260 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Mar 2 12:59:52.569358 systemd-logind[1456]: Removed session 9. Mar 2 12:59:57.588439 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:46734.service - OpenSSH per-connection server daemon (10.0.0.1:46734). 
Mar 2 12:59:57.622577 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 46734 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 12:59:57.635879 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:57.644683 systemd-logind[1456]: New session 10 of user core. Mar 2 12:59:57.650243 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 12:59:57.793670 sshd[5844]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:57.797611 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:46734.service: Deactivated successfully. Mar 2 12:59:57.800844 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 12:59:57.803580 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Mar 2 12:59:57.805340 systemd-logind[1456]: Removed session 10. Mar 2 12:59:59.768288 kubelet[2554]: E0302 12:59:59.767660 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:00.496096 systemd[1]: run-containerd-runc-k8s.io-74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1-runc.ew0Uiv.mount: Deactivated successfully. Mar 2 13:00:01.742841 kubelet[2554]: E0302 13:00:01.742704 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:02.658479 kubelet[2554]: I0302 13:00:02.658247 2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:00:02.807309 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:55132.service - OpenSSH per-connection server daemon (10.0.0.1:55132). 
Mar 2 13:00:02.905236 sshd[5887]: Accepted publickey for core from 10.0.0.1 port 55132 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:02.907838 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:02.929351 systemd-logind[1456]: New session 11 of user core. Mar 2 13:00:02.937419 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:00:03.139820 sshd[5887]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:03.150613 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:55132.service: Deactivated successfully. Mar 2 13:00:03.153906 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:00:03.155301 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:00:03.157496 systemd-logind[1456]: Removed session 11. Mar 2 13:00:08.175539 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:55146.service - OpenSSH per-connection server daemon (10.0.0.1:55146). Mar 2 13:00:08.229710 sshd[5919]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:08.232811 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:08.239359 systemd-logind[1456]: New session 12 of user core. Mar 2 13:00:08.252367 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:00:08.869753 sshd[5919]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:08.877161 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:55146.service: Deactivated successfully. Mar 2 13:00:08.880413 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:00:08.881871 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:00:08.883626 systemd-logind[1456]: Removed session 12. 
Mar 2 13:00:09.743697 kubelet[2554]: E0302 13:00:09.743629 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:13.884495 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:41440.service - OpenSSH per-connection server daemon (10.0.0.1:41440).
Mar 2 13:00:13.936331 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 41440 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:13.938442 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:13.945684 systemd-logind[1456]: New session 13 of user core.
Mar 2 13:00:13.955284 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 2 13:00:14.093699 sshd[5934]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:14.099730 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:41440.service: Deactivated successfully.
Mar 2 13:00:14.102946 systemd[1]: session-13.scope: Deactivated successfully.
Mar 2 13:00:14.104311 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Mar 2 13:00:14.106172 systemd-logind[1456]: Removed session 13.
Mar 2 13:00:15.742790 kubelet[2554]: E0302 13:00:15.742657 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:19.113434 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:41452.service - OpenSSH per-connection server daemon (10.0.0.1:41452).
Mar 2 13:00:19.180930 sshd[5951]: Accepted publickey for core from 10.0.0.1 port 41452 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:19.183266 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:19.191195 systemd-logind[1456]: New session 14 of user core.
Mar 2 13:00:19.200286 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 2 13:00:19.384826 sshd[5951]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:19.391586 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:41452.service: Deactivated successfully.
Mar 2 13:00:19.394503 systemd[1]: session-14.scope: Deactivated successfully.
Mar 2 13:00:19.396890 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Mar 2 13:00:19.398844 systemd-logind[1456]: Removed session 14.
Mar 2 13:00:24.402300 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:42090.service - OpenSSH per-connection server daemon (10.0.0.1:42090).
Mar 2 13:00:24.436632 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 42090 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:24.438557 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:24.445120 systemd-logind[1456]: New session 15 of user core.
Mar 2 13:00:24.451216 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 2 13:00:24.594129 sshd[6011]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:24.598794 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:42090.service: Deactivated successfully.
Mar 2 13:00:24.601113 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:00:24.602422 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:00:24.604220 systemd-logind[1456]: Removed session 15.
Mar 2 13:00:29.654883 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:42094.service - OpenSSH per-connection server daemon (10.0.0.1:42094).
Mar 2 13:00:29.718959 sshd[6046]: Accepted publickey for core from 10.0.0.1 port 42094 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:29.723582 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:29.734987 systemd-logind[1456]: New session 16 of user core.
Mar 2 13:00:29.748321 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:00:29.926887 sshd[6046]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:29.933846 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:42094.service: Deactivated successfully.
Mar 2 13:00:29.937578 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:00:29.939540 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:00:29.941484 systemd-logind[1456]: Removed session 16.
Mar 2 13:00:34.965720 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088).
Mar 2 13:00:35.004928 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:35.007777 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:35.036286 systemd-logind[1456]: New session 17 of user core.
Mar 2 13:00:35.041285 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:00:35.250266 sshd[6104]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:35.261596 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:34088.service: Deactivated successfully.
Mar 2 13:00:35.264499 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:00:35.267536 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:00:35.280931 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:34104.service - OpenSSH per-connection server daemon (10.0.0.1:34104).
Mar 2 13:00:35.284507 systemd-logind[1456]: Removed session 17.
Mar 2 13:00:35.342308 sshd[6120]: Accepted publickey for core from 10.0.0.1 port 34104 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:35.345662 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:35.353778 systemd-logind[1456]: New session 18 of user core.
Mar 2 13:00:35.363463 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:00:35.633368 sshd[6120]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:35.645700 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:34104.service: Deactivated successfully.
Mar 2 13:00:35.649750 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:00:35.654413 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:00:35.661957 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:34120.service - OpenSSH per-connection server daemon (10.0.0.1:34120).
Mar 2 13:00:35.667686 systemd-logind[1456]: Removed session 18.
Mar 2 13:00:35.719614 sshd[6133]: Accepted publickey for core from 10.0.0.1 port 34120 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:35.723885 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:35.737248 systemd-logind[1456]: New session 19 of user core.
Mar 2 13:00:35.744134 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:00:35.927834 sshd[6133]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:35.934103 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:34120.service: Deactivated successfully.
Mar 2 13:00:35.938194 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:00:35.941580 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:00:35.944702 systemd-logind[1456]: Removed session 19.
Mar 2 13:00:37.747582 kubelet[2554]: E0302 13:00:37.747155 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:40.742458 kubelet[2554]: E0302 13:00:40.742285 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:40.942553 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:34122.service - OpenSSH per-connection server daemon (10.0.0.1:34122).
Mar 2 13:00:40.999346 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 34122 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:41.001590 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:41.009395 systemd-logind[1456]: New session 20 of user core.
Mar 2 13:00:41.018281 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:00:41.194180 sshd[6150]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:41.200218 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:34122.service: Deactivated successfully.
Mar 2 13:00:41.204824 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:00:41.207286 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:00:41.209615 systemd-logind[1456]: Removed session 20.
Mar 2 13:00:46.215672 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:56996.service - OpenSSH per-connection server daemon (10.0.0.1:56996).
Mar 2 13:00:46.279912 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 56996 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:46.282255 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:46.288874 systemd-logind[1456]: New session 21 of user core.
Mar 2 13:00:46.300312 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:00:46.484277 sshd[6180]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:46.489867 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:56996.service: Deactivated successfully.
Mar 2 13:00:46.493566 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:00:46.494788 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:00:46.497345 systemd-logind[1456]: Removed session 21.
Mar 2 13:00:50.255589 systemd[1]: run-containerd-runc-k8s.io-117e14d3954c0d7aabf8d1762544a501734e63404c890063b7334a6a9cc2e37b-runc.3FWx1O.mount: Deactivated successfully.
Mar 2 13:00:51.513788 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:57008.service - OpenSSH per-connection server daemon (10.0.0.1:57008).
Mar 2 13:00:51.583284 sshd[6241]: Accepted publickey for core from 10.0.0.1 port 57008 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:51.586559 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:51.595123 systemd-logind[1456]: New session 22 of user core.
Mar 2 13:00:51.603298 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:00:51.906403 sshd[6241]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:51.936763 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:57008.service: Deactivated successfully.
Mar 2 13:00:51.941263 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:00:51.945474 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:00:51.947538 systemd-logind[1456]: Removed session 22.
Mar 2 13:00:54.741956 kubelet[2554]: E0302 13:00:54.741849 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:56.937831 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:59876.service - OpenSSH per-connection server daemon (10.0.0.1:59876).
Mar 2 13:00:56.994750 sshd[6268]: Accepted publickey for core from 10.0.0.1 port 59876 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:56.996832 sshd[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:57.003526 systemd-logind[1456]: New session 23 of user core.
Mar 2 13:00:57.010451 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:00:57.179378 sshd[6268]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:57.192849 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:59876.service: Deactivated successfully.
Mar 2 13:00:57.195908 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:00:57.199087 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:00:57.206655 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:59886.service - OpenSSH per-connection server daemon (10.0.0.1:59886).
Mar 2 13:00:57.208755 systemd-logind[1456]: Removed session 23.
Mar 2 13:00:57.249493 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 59886 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:57.251745 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:57.258581 systemd-logind[1456]: New session 24 of user core.
Mar 2 13:00:57.268261 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:00:57.729274 sshd[6283]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:57.740140 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:59888.service - OpenSSH per-connection server daemon (10.0.0.1:59888).
Mar 2 13:00:57.743578 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:59886.service: Deactivated successfully.
Mar 2 13:00:57.762690 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:00:57.765817 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:00:57.768825 systemd-logind[1456]: Removed session 24.
Mar 2 13:00:57.822123 sshd[6333]: Accepted publickey for core from 10.0.0.1 port 59888 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:57.825322 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:57.833226 systemd-logind[1456]: New session 25 of user core.
Mar 2 13:00:57.847388 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:00:58.767369 sshd[6333]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:58.774798 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:59888.service: Deactivated successfully.
Mar 2 13:00:58.777261 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:00:58.778756 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:00:58.792971 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:59904.service - OpenSSH per-connection server daemon (10.0.0.1:59904).
Mar 2 13:00:58.795515 systemd-logind[1456]: Removed session 25.
Mar 2 13:00:58.858539 sshd[6365]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:00:58.860816 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:58.869218 systemd-logind[1456]: New session 26 of user core.
Mar 2 13:00:58.880393 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:01:00.483643 sshd[6365]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:00.593643 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:59904.service: Deactivated successfully.
Mar 2 13:01:00.602134 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:01:00.602774 systemd[1]: session-26.scope: Consumed 1.289s CPU time.
Mar 2 13:01:00.605400 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:01:00.619424 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:59908.service - OpenSSH per-connection server daemon (10.0.0.1:59908).
Mar 2 13:01:00.622214 systemd-logind[1456]: Removed session 26.
Mar 2 13:01:00.830194 sshd[6397]: Accepted publickey for core from 10.0.0.1 port 59908 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:01:00.876407 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:00.898517 systemd-logind[1456]: New session 27 of user core.
Mar 2 13:01:00.907455 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 13:01:01.197179 sshd[6397]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:01.203216 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:59908.service: Deactivated successfully.
Mar 2 13:01:01.207137 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 13:01:01.210569 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit.
Mar 2 13:01:01.237169 systemd-logind[1456]: Removed session 27.
Mar 2 13:01:06.226205 systemd[1]: Started sshd@27-10.0.0.34:22-10.0.0.1:34882.service - OpenSSH per-connection server daemon (10.0.0.1:34882).
Mar 2 13:01:06.261504 sshd[6418]: Accepted publickey for core from 10.0.0.1 port 34882 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:01:06.264235 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:06.272193 systemd-logind[1456]: New session 28 of user core.
Mar 2 13:01:06.286429 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 13:01:06.556688 sshd[6418]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:06.562355 systemd-logind[1456]: Session 28 logged out. Waiting for processes to exit.
Mar 2 13:01:06.562985 systemd[1]: sshd@27-10.0.0.34:22-10.0.0.1:34882.service: Deactivated successfully.
Mar 2 13:01:06.567835 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 13:01:06.572258 systemd-logind[1456]: Removed session 28.
Mar 2 13:01:09.742981 kubelet[2554]: E0302 13:01:09.742861 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:11.575410 systemd[1]: Started sshd@28-10.0.0.34:22-10.0.0.1:34890.service - OpenSSH per-connection server daemon (10.0.0.1:34890).
Mar 2 13:01:11.660916 sshd[6437]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:01:11.663280 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:11.670078 systemd-logind[1456]: New session 29 of user core.
Mar 2 13:01:11.679365 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 13:01:11.884610 sshd[6437]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:11.888903 systemd[1]: sshd@28-10.0.0.34:22-10.0.0.1:34890.service: Deactivated successfully.
Mar 2 13:01:11.892328 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 13:01:11.895375 systemd-logind[1456]: Session 29 logged out. Waiting for processes to exit.
Mar 2 13:01:11.896962 systemd-logind[1456]: Removed session 29.
Mar 2 13:01:14.744062 kubelet[2554]: E0302 13:01:14.743798 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:16.901427 systemd[1]: Started sshd@29-10.0.0.34:22-10.0.0.1:42096.service - OpenSSH per-connection server daemon (10.0.0.1:42096).
Mar 2 13:01:17.023338 sshd[6452]: Accepted publickey for core from 10.0.0.1 port 42096 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:01:17.026260 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:17.039422 systemd-logind[1456]: New session 30 of user core.
Mar 2 13:01:17.046376 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 13:01:17.249165 sshd[6452]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:17.255135 systemd[1]: sshd@29-10.0.0.34:22-10.0.0.1:42096.service: Deactivated successfully.
Mar 2 13:01:17.258809 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 13:01:17.260178 systemd-logind[1456]: Session 30 logged out. Waiting for processes to exit.
Mar 2 13:01:17.262142 systemd-logind[1456]: Removed session 30.
Mar 2 13:01:17.743071 kubelet[2554]: E0302 13:01:17.742918 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:21.674168 systemd[1]: run-containerd-runc-k8s.io-74c9c84f1173e1468aa3435e5df48b235d67af49dff2726571686045584672e1-runc.6HmuwE.mount: Deactivated successfully.
Mar 2 13:01:22.265430 systemd[1]: Started sshd@30-10.0.0.34:22-10.0.0.1:48768.service - OpenSSH per-connection server daemon (10.0.0.1:48768).
Mar 2 13:01:22.343156 sshd[6512]: Accepted publickey for core from 10.0.0.1 port 48768 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:01:22.345549 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:22.352722 systemd-logind[1456]: New session 31 of user core.
Mar 2 13:01:22.359348 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 13:01:22.607736 sshd[6512]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:22.615760 systemd[1]: sshd@30-10.0.0.34:22-10.0.0.1:48768.service: Deactivated successfully.
Mar 2 13:01:22.621943 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 13:01:22.633834 systemd-logind[1456]: Session 31 logged out. Waiting for processes to exit.
Mar 2 13:01:22.654376 systemd-logind[1456]: Removed session 31.