Mar 14 00:43:16.599005 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026 Mar 14 00:43:16.599036 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:43:16.599053 kernel: BIOS-provided physical RAM map: Mar 14 00:43:16.599062 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 14 00:43:16.599071 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 14 00:43:16.599079 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 14 00:43:16.599091 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 14 00:43:16.599102 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 14 00:43:16.599109 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 14 00:43:16.599125 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 14 00:43:16.599134 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 14 00:43:16.599143 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 14 00:43:16.599152 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 14 00:43:16.599161 kernel: NX (Execute Disable) protection: active Mar 14 00:43:16.599172 kernel: APIC: Static calls initialized Mar 14 00:43:16.599229 kernel: SMBIOS 2.8 present. 
Mar 14 00:43:16.599244 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 14 00:43:16.599254 kernel: Hypervisor detected: KVM Mar 14 00:43:16.599263 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 14 00:43:16.599271 kernel: kvm-clock: using sched offset of 14176240852 cycles Mar 14 00:43:16.599280 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 14 00:43:16.599289 kernel: tsc: Detected 2445.426 MHz processor Mar 14 00:43:16.599302 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 14 00:43:16.599313 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 14 00:43:16.599330 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 14 00:43:16.599340 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 14 00:43:16.599352 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 14 00:43:16.599362 kernel: Using GB pages for direct mapping Mar 14 00:43:16.599373 kernel: ACPI: Early table checksum verification disabled Mar 14 00:43:16.599384 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 14 00:43:16.599396 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599406 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599416 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599439 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 14 00:43:16.599566 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599579 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599589 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599599 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:43:16.599656 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 14 00:43:16.599672 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 14 00:43:16.599690 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 14 00:43:16.599706 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 14 00:43:16.599717 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 14 00:43:16.599727 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 14 00:43:16.599738 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 14 00:43:16.599748 kernel: No NUMA configuration found Mar 14 00:43:16.599759 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 14 00:43:16.599772 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 14 00:43:16.599782 kernel: Zone ranges: Mar 14 00:43:16.599794 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:43:16.599803 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 14 00:43:16.599814 kernel: Normal empty Mar 14 00:43:16.599827 kernel: Movable zone start for each node Mar 14 00:43:16.599838 kernel: Early memory node ranges Mar 14 00:43:16.599849 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 14 00:43:16.599862 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 14 00:43:16.599871 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Mar 14 00:43:16.599889 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:43:16.599899 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 14 00:43:16.599909 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 14 00:43:16.599921 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 14 00:43:16.599931 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 14 00:43:16.599942 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 14 00:43:16.599952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 14 00:43:16.599962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 14 00:43:16.599974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:43:16.599990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 14 00:43:16.600002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 14 00:43:16.600012 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:43:16.600024 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 14 00:43:16.600036 kernel: TSC deadline timer available Mar 14 00:43:16.600046 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 14 00:43:16.600057 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 14 00:43:16.600069 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 14 00:43:16.600123 kernel: kvm-guest: setup PV sched yield Mar 14 00:43:16.600140 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 14 00:43:16.600150 kernel: Booting paravirtualized kernel on KVM Mar 14 00:43:16.600161 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:43:16.600171 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 14 00:43:16.600181 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 14 00:43:16.600192 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 14 00:43:16.600203 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 14 00:43:16.600212 kernel: kvm-guest: PV spinlocks enabled Mar 14 00:43:16.600222 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:43:16.600241 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:43:16.600252 kernel: random: crng init done Mar 14 00:43:16.600264 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 14 00:43:16.600275 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 14 00:43:16.600287 kernel: Fallback order for Node 0: 0 Mar 14 00:43:16.600298 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 14 00:43:16.600309 kernel: Policy zone: DMA32 Mar 14 00:43:16.600320 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:43:16.600334 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 14 00:43:16.600345 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 14 00:43:16.600356 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:43:16.600366 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:43:16.600378 kernel: Dynamic Preempt: voluntary Mar 14 00:43:16.600389 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:43:16.600400 kernel: rcu: RCU event tracing is enabled. Mar 14 00:43:16.600411 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 14 00:43:16.600422 kernel: Trampoline variant of Tasks RCU enabled. Mar 14 00:43:16.600437 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:43:16.600448 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:43:16.600460 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 14 00:43:16.600470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 14 00:43:16.600562 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 14 00:43:16.600580 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 14 00:43:16.600591 kernel: Console: colour VGA+ 80x25 Mar 14 00:43:16.600602 kernel: printk: console [ttyS0] enabled Mar 14 00:43:16.600662 kernel: ACPI: Core revision 20230628 Mar 14 00:43:16.600681 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 14 00:43:16.600692 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:43:16.600703 kernel: x2apic enabled Mar 14 00:43:16.600713 kernel: APIC: Switched APIC routing to: physical x2apic Mar 14 00:43:16.600723 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 14 00:43:16.600734 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 14 00:43:16.600744 kernel: kvm-guest: setup PV IPIs Mar 14 00:43:16.600755 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 14 00:43:16.600781 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 14 00:43:16.600792 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 14 00:43:16.600802 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 14 00:43:16.600813 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 14 00:43:16.600829 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 14 00:43:16.600841 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:43:16.600852 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:43:16.600865 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:43:16.600877 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:43:16.600893 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 14 00:43:16.600958 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 14 00:43:16.600971 kernel: active return thunk: srso_alias_return_thunk Mar 14 00:43:16.600983 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 14 00:43:16.600995 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 14 00:43:16.601006 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:43:16.601018 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:43:16.601030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:43:16.601048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:43:16.601059 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:43:16.601070 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 14 00:43:16.601081 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:43:16.601094 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:43:16.601106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:43:16.601116 kernel: landlock: Up and running. Mar 14 00:43:16.601127 kernel: SELinux: Initializing. Mar 14 00:43:16.601140 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:43:16.601156 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:43:16.601166 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 14 00:43:16.601177 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:43:16.601189 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:43:16.601200 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:43:16.601213 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 14 00:43:16.601224 kernel: signal: max sigframe size: 1776 Mar 14 00:43:16.601237 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:43:16.601249 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:43:16.601267 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:43:16.601277 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:43:16.601288 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:43:16.601298 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 14 00:43:16.601310 kernel: smp: Brought up 1 node, 4 CPUs Mar 14 00:43:16.601323 kernel: smpboot: Max logical packages: 1 Mar 14 00:43:16.601334 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 14 00:43:16.601344 kernel: devtmpfs: initialized Mar 14 00:43:16.601355 kernel: x86/mm: Memory block size: 128MB Mar 14 00:43:16.601372 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:43:16.601383 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 14 00:43:16.601394 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:43:16.601406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:43:16.601417 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:43:16.601428 kernel: audit: type=2000 audit(1773448990.024:1): state=initialized audit_enabled=0 res=1 Mar 14 00:43:16.601440 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:43:16.601451 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:43:16.601462 kernel: cpuidle: using governor menu Mar 14 00:43:16.601478 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:43:16.601570 kernel: dca service started, version 1.12.1 Mar 14 00:43:16.601582 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 14 00:43:16.601595 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 14 00:43:16.601606 kernel: PCI: Using configuration type 1 for base access Mar 14 00:43:16.601659 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 14 00:43:16.601670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:43:16.601681 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:43:16.601695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:43:16.601711 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:43:16.601722 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:43:16.601735 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:43:16.601746 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:43:16.601757 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 14 00:43:16.601770 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:43:16.601781 kernel: ACPI: Interpreter enabled Mar 14 00:43:16.601792 kernel: ACPI: PM: (supports S0 S3 S5) Mar 14 00:43:16.601802 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:43:16.601820 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:43:16.601832 kernel: PCI: Using E820 reservations for host bridge windows Mar 14 00:43:16.601844 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 14 00:43:16.601857 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 14 00:43:16.603199 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 14 00:43:16.603596 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 14 00:43:16.603900 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 14 00:43:16.603927 kernel: PCI host bridge to bus 0000:00 Mar 14 00:43:16.604332 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 14 00:43:16.604863 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Mar 14 00:43:16.605073 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 14 00:43:16.605278 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 14 00:43:16.605440 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 14 00:43:16.605741 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 14 00:43:16.605971 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 14 00:43:16.606433 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 14 00:43:16.607425 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 14 00:43:16.607720 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 14 00:43:16.609409 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 14 00:43:16.609725 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 14 00:43:16.609961 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 14 00:43:16.610332 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 14 00:43:16.610723 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 14 00:43:16.610936 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 14 00:43:16.611140 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 14 00:43:16.611454 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 14 00:43:16.612462 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 14 00:43:16.612909 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 14 00:43:16.613161 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 14 00:43:16.613751 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 14 00:43:16.613986 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 14 00:43:16.614118 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 14 00:43:16.614315 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 14 00:43:16.614580 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 14 00:43:16.614882 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 14 00:43:16.615087 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 14 00:43:16.615354 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 14 00:43:16.615596 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 14 00:43:16.615799 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 14 00:43:16.616112 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 14 00:43:16.616271 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 14 00:43:16.616288 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 14 00:43:16.616295 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 14 00:43:16.616302 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 14 00:43:16.616309 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 14 00:43:16.616316 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 14 00:43:16.616323 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 14 00:43:16.616329 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 14 00:43:16.616336 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 14 00:43:16.616345 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 
14 00:43:16.616352 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 14 00:43:16.616358 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 14 00:43:16.616365 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 14 00:43:16.616371 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 14 00:43:16.616378 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 14 00:43:16.616385 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 14 00:43:16.616391 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 14 00:43:16.616398 kernel: iommu: Default domain type: Translated Mar 14 00:43:16.616407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:43:16.616414 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:43:16.616420 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 14 00:43:16.616427 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 14 00:43:16.616434 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 14 00:43:16.616675 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 14 00:43:16.616801 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 14 00:43:16.616981 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 14 00:43:16.616994 kernel: vgaarb: loaded Mar 14 00:43:16.617007 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 14 00:43:16.617014 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 14 00:43:16.617021 kernel: clocksource: Switched to clocksource kvm-clock Mar 14 00:43:16.617027 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:43:16.617034 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:43:16.617041 kernel: pnp: PnP ACPI init Mar 14 00:43:16.617358 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 14 00:43:16.617372 kernel: pnp: PnP ACPI: found 6 devices Mar 14 00:43:16.617386 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:43:16.617393 kernel: NET: Registered PF_INET protocol family Mar 14 00:43:16.617400 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 14 00:43:16.617412 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 14 00:43:16.617425 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 14 00:43:16.617438 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 14 00:43:16.617448 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:43:16.617458 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:43:16.617468 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:43:16.617582 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:43:16.617596 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:43:16.617608 kernel: NET: Registered PF_XDP protocol family Mar 14 00:43:16.617889 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 14 00:43:16.618107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:43:16.618327 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:43:16.618657 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 14 00:43:16.618828 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Mar 14 00:43:16.618991 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 14 00:43:16.619006 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:43:16.619018 kernel: Initialise system trusted keyrings Mar 14 00:43:16.619030 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:43:16.619042 kernel: Key type asymmetric registered Mar 14 00:43:16.619055 kernel: Asymmetric key parser 'x509' registered Mar 14 00:43:16.619068 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:43:16.619080 kernel: io scheduler mq-deadline registered Mar 14 00:43:16.619091 kernel: io scheduler kyber registered Mar 14 00:43:16.619109 kernel: io scheduler bfq registered Mar 14 00:43:16.619122 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:43:16.619132 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:43:16.619142 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:43:16.619152 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 14 00:43:16.619164 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:43:16.619179 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:43:16.619192 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:43:16.619205 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:43:16.619224 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:43:16.619718 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 14 00:43:16.619742 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:43:16.619919 kernel: rtc_cmos 00:04: registered as rtc0 Mar 14 00:43:16.620084 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:43:15 UTC (1773448995) Mar 14 00:43:16.620306 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:43:16.620325 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:43:16.620337 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:43:16.620355 kernel: Segment Routing with IPv6 Mar 14 00:43:16.620367 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:43:16.620379 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:43:16.620391 kernel: Key type dns_resolver registered Mar 14 00:43:16.620403 kernel: IPI shorthand broadcast: enabled Mar 14 00:43:16.620415 kernel: sched_clock: Marking stable (4223040041, 1091538361)->(6009875975, -695297573) Mar 14 00:43:16.620426 kernel: registered taskstats version 1 Mar 14 00:43:16.620439 kernel: Loading compiled-in X.509 certificates Mar 14 00:43:16.620451 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:43:16.620467 kernel: Key type .fscrypt registered Mar 14 00:43:16.620478 kernel: Key type fscrypt-provisioning registered Mar 14 00:43:16.620572 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:43:16.620584 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:43:16.620596 kernel: ima: No architecture policies found Mar 14 00:43:16.620609 kernel: clk: Disabling unused clocks Mar 14 00:43:16.620667 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:43:16.620680 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:43:16.620692 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:43:16.620709 kernel: Run /init as init process Mar 14 00:43:16.620720 kernel: with arguments: Mar 14 00:43:16.620732 kernel: /init Mar 14 00:43:16.620746 kernel: with environment: Mar 14 00:43:16.620757 kernel: HOME=/ Mar 14 00:43:16.620768 kernel: TERM=linux Mar 14 00:43:16.620782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:43:16.620796 systemd[1]: Detected virtualization kvm. Mar 14 00:43:16.620812 systemd[1]: Detected architecture x86-64. Mar 14 00:43:16.620825 systemd[1]: Running in initrd. Mar 14 00:43:16.620836 systemd[1]: No hostname configured, using default hostname. Mar 14 00:43:16.620847 systemd[1]: Hostname set to <localhost>. Mar 14 00:43:16.620860 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:43:16.620872 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:43:16.620884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:43:16.620896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:43:16.620913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 14 00:43:16.620925 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:43:16.620937 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:43:16.620950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:43:16.620965 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:43:16.620977 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:43:16.620993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:43:16.621005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:43:16.621017 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:43:16.621030 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:43:16.621042 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:43:16.621074 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:43:16.621089 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:43:16.621105 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:43:16.621118 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:43:16.621131 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:43:16.621142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:43:16.621153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:43:16.621164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:43:16.621178 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:43:16.621192 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:43:16.621211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:43:16.621222 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:43:16.621233 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:43:16.621246 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:43:16.621258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:43:16.621271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:43:16.621283 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:43:16.621328 systemd-journald[195]: Collecting audit messages is disabled. Mar 14 00:43:16.621369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:43:16.621382 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:43:16.621398 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:43:16.621411 systemd-journald[195]: Journal started Mar 14 00:43:16.621437 systemd-journald[195]: Runtime Journal (/run/log/journal/fa4a62fd6a6340aca48aa21ee3cc7b17) is 6.0M, max 48.4M, 42.3M free. Mar 14 00:43:16.632691 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:43:16.640787 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:43:16.646058 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:43:16.648365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:43:16.711227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:43:16.712864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:43:16.768868 systemd-modules-load[196]: Inserted module 'overlay' Mar 14 00:43:17.024222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:43:17.024318 kernel: Bridge firewalling registered Mar 14 00:43:16.818947 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 14 00:43:16.822671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:43:17.043813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:43:17.044327 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:43:17.069112 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:43:17.099568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:43:17.120007 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:43:17.129018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 14 00:43:17.146819 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:43:17.199936 dracut-cmdline[232]: dracut-dracut-053 Mar 14 00:43:17.205345 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:43:17.210919 systemd-resolved[225]: Positive Trust Anchors: Mar 14 00:43:17.210930 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:43:17.210975 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:43:17.215879 systemd-resolved[225]: Defaulting to hostname 'linux'. Mar 14 00:43:17.217862 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:43:17.241220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:43:17.365651 kernel: SCSI subsystem initialized Mar 14 00:43:17.385592 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:43:17.406960 kernel: iscsi: registered transport (tcp) Mar 14 00:43:17.440151 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:43:17.440290 kernel: QLogic iSCSI HBA Driver Mar 14 00:43:17.540910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:43:17.572045 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:43:17.620962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 14 00:43:17.621041 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:43:17.625790 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:43:17.726910 kernel: raid6: avx2x4 gen() 22880 MB/s Mar 14 00:43:17.745775 kernel: raid6: avx2x2 gen() 21618 MB/s Mar 14 00:43:17.767877 kernel: raid6: avx2x1 gen() 10296 MB/s Mar 14 00:43:17.768187 kernel: raid6: using algorithm avx2x4 gen() 22880 MB/s Mar 14 00:43:17.791668 kernel: raid6: .... xor() 4301 MB/s, rmw enabled Mar 14 00:43:17.791832 kernel: raid6: using avx2x2 recovery algorithm Mar 14 00:43:17.826943 kernel: xor: automatically using best checksumming function avx Mar 14 00:43:18.210137 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:43:18.235390 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:43:18.250164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:43:18.306974 systemd-udevd[414]: Using default interface naming scheme 'v255'. Mar 14 00:43:18.315144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:43:18.339047 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 14 00:43:18.378846 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Mar 14 00:43:18.447326 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:43:18.473612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:43:18.635129 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:43:18.654846 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 00:43:18.687175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:43:18.688683 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:43:18.697205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:43:18.721164 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:43:18.736441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:43:18.778114 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:43:18.778181 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 14 00:43:18.789971 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:43:18.808693 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:43:18.808796 kernel: AES CTR mode by8 optimization enabled Mar 14 00:43:18.810166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:43:18.810727 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:43:18.846392 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 14 00:43:18.847035 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:43:18.847055 kernel: GPT:9289727 != 19775487 Mar 14 00:43:18.847069 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:43:18.847084 kernel: GPT:9289727 != 19775487 Mar 14 00:43:18.847098 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 00:43:18.866861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:43:18.869851 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:43:18.884378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:43:18.884981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:43:18.896716 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:43:19.174939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:43:19.195018 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474) Mar 14 00:43:19.195075 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458) Mar 14 00:43:19.212703 kernel: libata version 3.00 loaded. Mar 14 00:43:19.227687 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 14 00:43:19.243666 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 00:43:19.247877 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 00:43:19.277976 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 00:43:19.278238 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 00:43:19.266842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Mar 14 00:43:19.292320 kernel: scsi host0: ahci Mar 14 00:43:19.292875 kernel: scsi host1: ahci Mar 14 00:43:19.296748 kernel: scsi host2: ahci Mar 14 00:43:19.297896 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 14 00:43:19.316705 kernel: scsi host3: ahci Mar 14 00:43:19.317074 kernel: scsi host4: ahci Mar 14 00:43:19.317323 kernel: scsi host5: ahci Mar 14 00:43:19.317697 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 14 00:43:19.317716 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 14 00:43:19.317730 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 14 00:43:19.317757 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 14 00:43:19.317772 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 14 00:43:19.317787 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 14 00:43:19.314679 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 14 00:43:19.688618 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 00:43:19.688708 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 00:43:19.688749 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 00:43:19.688768 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 14 00:43:19.688783 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 14 00:43:19.688798 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 14 00:43:19.688816 kernel: ata3.00: applying bridge limits Mar 14 00:43:19.688834 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 00:43:19.688852 kernel: ata3.00: configured for UDMA/100 Mar 14 00:43:19.688870 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 14 00:43:19.668091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:43:19.707030 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:43:19.727859 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:43:19.739954 disk-uuid[563]: Primary Header is updated. Mar 14 00:43:19.739954 disk-uuid[563]: Secondary Entries is updated. Mar 14 00:43:19.739954 disk-uuid[563]: Secondary Header is updated. Mar 14 00:43:19.752706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:43:19.753117 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:43:19.765741 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 14 00:43:19.766226 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 14 00:43:19.773597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:43:19.785150 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:43:19.785602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:43:19.791731 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 14 00:43:20.786570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:43:20.788007 disk-uuid[564]: The operation has completed successfully. Mar 14 00:43:20.839620 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:43:20.839850 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Mar 14 00:43:20.877140 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:43:20.886068 sh[591]: Success Mar 14 00:43:20.906600 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 14 00:43:21.070588 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:43:21.095151 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:43:21.101994 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:43:21.212131 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:43:21.212205 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:43:21.212225 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:43:21.219976 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:43:21.220065 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:43:21.260810 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:43:21.268386 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:43:21.299091 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:43:21.305405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:43:21.348858 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:43:21.348948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:43:21.348966 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:43:21.383856 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:43:21.406848 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:43:21.417000 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:43:21.433865 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:43:21.451268 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:43:21.888613 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:43:21.946339 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 14 00:43:21.979394 ignition[683]: Ignition 2.19.0 Mar 14 00:43:21.979426 ignition[683]: Stage: fetch-offline Mar 14 00:43:21.979628 ignition[683]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:43:21.979680 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:43:21.981878 ignition[683]: parsed url from cmdline: "" Mar 14 00:43:21.981885 ignition[683]: no config URL provided Mar 14 00:43:22.030736 kernel: hrtimer: interrupt took 3339665 ns Mar 14 00:43:21.981894 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:43:21.981916 ignition[683]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:43:21.981993 ignition[683]: op(1): [started] loading QEMU firmware config module Mar 14 00:43:21.982001 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 14 00:43:22.177010 ignition[683]: op(1): [finished] loading QEMU firmware config module Mar 14 00:43:22.219831 systemd-networkd[779]: lo: Link UP Mar 14 00:43:22.219861 systemd-networkd[779]: lo: Gained carrier Mar 14 00:43:22.227032 systemd-networkd[779]: Enumeration completed Mar 14 00:43:22.229063 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:43:22.239796 systemd[1]: Reached target network.target - Network. Mar 14 00:43:22.247144 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:43:22.247172 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:43:22.279830 systemd-networkd[779]: eth0: Link UP Mar 14 00:43:22.279840 systemd-networkd[779]: eth0: Gained carrier Mar 14 00:43:22.279853 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:43:22.312602 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.158/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:43:22.488133 ignition[683]: parsing config with SHA512: 23f0f7763e430c4ffa21b3487a40a01a778b791f4c9c1713b58166a1eb8f94992acf5a236302a76ae21bcfdd94981a7d0498a4b6cc5e13a46010efe47b7b1bcc Mar 14 00:43:22.507308 unknown[683]: fetched base config from "system" Mar 14 00:43:22.507461 unknown[683]: fetched user config from "qemu" Mar 14 00:43:22.510234 ignition[683]: fetch-offline: fetch-offline passed Mar 14 00:43:22.513774 systemd-resolved[225]: Detected conflict on linux IN A 10.0.0.158 Mar 14 00:43:22.510472 ignition[683]: Ignition finished successfully Mar 14 00:43:22.513784 systemd-resolved[225]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Mar 14 00:43:22.513909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:43:22.523829 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 14 00:43:22.538859 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:43:22.592415 ignition[784]: Ignition 2.19.0 Mar 14 00:43:22.592435 ignition[784]: Stage: kargs Mar 14 00:43:22.592815 ignition[784]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:43:22.599286 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 14 00:43:22.592831 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:43:22.594225 ignition[784]: kargs: kargs passed Mar 14 00:43:22.594276 ignition[784]: Ignition finished successfully Mar 14 00:43:22.625864 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:43:22.653115 ignition[792]: Ignition 2.19.0 Mar 14 00:43:22.653175 ignition[792]: Stage: disks Mar 14 00:43:22.653871 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:43:22.653885 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:43:22.657935 ignition[792]: disks: disks passed Mar 14 00:43:22.658003 ignition[792]: Ignition finished successfully Mar 14 00:43:22.673403 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:43:22.673990 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 14 00:43:22.683345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:43:22.691324 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:43:22.699571 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:43:22.699727 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:43:22.718901 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:43:22.751889 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 14 00:43:22.759322 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:43:22.776829 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:43:22.934822 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none. Mar 14 00:43:22.935559 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:43:22.940008 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:43:22.963960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:43:22.972271 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 14 00:43:22.972830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 14 00:43:22.972877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:43:22.999906 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Mar 14 00:43:22.972902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:43:23.014819 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:43:23.014890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:43:23.014906 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:43:23.021684 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:43:23.022743 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:43:23.023097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 14 00:43:23.045892 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 14 00:43:23.099457 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:43:23.110116 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:43:23.120095 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:43:23.127164 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:43:23.276244 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:43:23.290767 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:43:23.299702 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 14 00:43:23.311781 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:43:23.313218 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:43:23.337826 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:43:23.346166 ignition[922]: INFO : Ignition 2.19.0 Mar 14 00:43:23.346166 ignition[922]: INFO : Stage: mount Mar 14 00:43:23.346166 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:43:23.346166 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:43:23.346166 ignition[922]: INFO : mount: mount passed Mar 14 00:43:23.346166 ignition[922]: INFO : Ignition finished successfully Mar 14 00:43:23.370318 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:43:23.395747 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:43:24.005039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:43:24.192202 systemd-networkd[779]: eth0: Gained IPv6LL Mar 14 00:43:24.228616 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Mar 14 00:43:24.236566 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:43:24.236677 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:43:24.236705 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:43:24.265562 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:43:24.268852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 14 00:43:24.325169 ignition[954]: INFO : Ignition 2.19.0 Mar 14 00:43:24.325169 ignition[954]: INFO : Stage: files Mar 14 00:43:24.331607 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:43:24.331607 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:43:24.331607 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:43:24.331607 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:43:24.331607 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:43:24.373758 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:43:24.373758 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:43:24.373758 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:43:24.373758 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:43:24.373758 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 14 00:43:24.337213 unknown[954]: wrote ssh authorized keys file for user: core Mar 14 00:43:24.417234 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 14 00:43:24.575193 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:43:24.575193 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 14 00:43:24.586554 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 14 00:43:24.591446 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:43:24.597272 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:43:24.597272 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:43:24.608569 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:43:24.608569 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:43:24.618975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:43:24.625026 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:43:24.632581 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:43:24.632581 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 14 00:43:24.648924 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 14 00:43:24.648924 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 14 00:43:24.670856 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 14 00:43:24.968476 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 14 00:43:26.498463 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 14 00:43:26.498463 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 14 00:43:26.510118 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:43:26.518735 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:43:26.518735 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 14 00:43:26.518735 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 14 00:43:26.536218 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 14 00:43:26.542615 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 14 00:43:26.542615 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 14 00:43:26.553414 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 14 00:43:26.589853 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 14 00:43:26.596468 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 14 00:43:26.602707 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 14 00:43:26.602707 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 14 00:43:26.602707 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 14 00:43:26.602707 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:43:26.602707 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:43:26.602707 ignition[954]: INFO : files: files passed Mar 14 00:43:26.602707 ignition[954]: INFO : Ignition finished successfully Mar 14 00:43:26.641163 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 14 00:43:26.659788 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:43:26.666742 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:43:26.684850 systemd[1]: ignition-quench.service: Deactivated successfully. 
Mar 14 00:43:26.685055 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 14 00:43:26.697715 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Mar 14 00:43:26.703012 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:43:26.710373 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:43:26.716723 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:43:26.723154 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:43:26.734171 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 14 00:43:26.759948 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:43:26.806178 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:43:26.806368 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:43:26.815329 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:43:26.819968 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:43:26.826217 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:43:26.839896 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:43:26.860010 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:43:26.880950 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 14 00:43:26.891908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:43:26.896099 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:43:26.904380 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:43:26.911298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:43:26.911623 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:43:26.919719 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:43:26.926586 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:43:26.933717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:43:26.940742 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:43:26.944432 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:43:26.950838 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:43:26.957417 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:43:26.964582 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:43:26.971310 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:43:26.974689 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:43:26.980307 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:43:26.980450 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:43:26.988579 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 14 00:43:26.994269 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:43:27.001235 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:43:27.001591 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:43:27.007895 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:43:27.008102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 14 00:43:27.014694 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:43:27.014882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:43:27.021804 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:43:27.027394 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:43:27.031831 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:43:27.037185 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:43:27.099291 ignition[1009]: INFO : Ignition 2.19.0 Mar 14 00:43:27.099291 ignition[1009]: INFO : Stage: umount Mar 14 00:43:27.099291 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:43:27.099291 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:43:27.099291 ignition[1009]: INFO : umount: umount passed Mar 14 00:43:27.099291 ignition[1009]: INFO : Ignition finished successfully Mar 14 00:43:27.037369 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:43:27.038447 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:43:27.038598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:43:27.038946 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:43:27.039060 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:43:27.039423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:43:27.039618 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:43:27.040101 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:43:27.040230 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:43:27.063891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:43:27.068922 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:43:27.073443 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:43:27.073736 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:43:27.080835 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:43:27.081347 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:43:27.102758 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:43:27.103588 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:43:27.103743 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:43:27.110227 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:43:27.110374 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:43:27.114916 systemd[1]: Stopped target network.target - Network. Mar 14 00:43:27.118232 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:43:27.118379 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Mar 14 00:43:27.124189 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:43:27.124289 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:43:27.131938 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:43:27.132103 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:43:27.134846 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:43:27.134977 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:43:27.140199 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:43:27.140298 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:43:27.146250 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:43:27.155069 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:43:27.170283 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:43:27.170456 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:43:27.178730 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:43:27.178863 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:43:27.184598 systemd-networkd[779]: eth0: DHCPv6 lease lost Mar 14 00:43:27.191725 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:43:27.191968 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:43:27.199461 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:43:27.199931 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:43:27.203743 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:43:27.203854 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:43:27.228853 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:43:27.236932 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:43:27.237022 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:43:27.240166 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:43:27.240226 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:43:27.250143 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:43:27.250203 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:43:27.253437 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:43:27.279921 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:43:27.280212 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:43:27.288149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:43:27.288232 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:43:27.294536 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:43:27.294600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:43:27.301287 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:43:27.301375 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:43:27.310368 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 14 00:43:27.310453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:43:27.313049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:43:27.313126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:43:27.329137 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:43:27.330094 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:43:27.330178 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:43:27.335836 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 14 00:43:27.335920 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:43:27.342965 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:43:27.343048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:43:27.347646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:43:27.347750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:43:27.360876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:43:27.361027 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:43:27.471744 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:43:27.471959 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:43:27.481061 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:43:27.496704 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:43:27.512616 systemd[1]: Switching root. Mar 14 00:43:27.549698 systemd-journald[195]: Journal stopped Mar 14 00:43:28.987570 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 14 00:43:28.987703 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:43:28.987737 kernel: SELinux: policy capability open_perms=1 Mar 14 00:43:28.987758 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:43:28.987777 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:43:28.987796 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:43:28.987865 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:43:28.987886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:43:28.987916 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:43:28.987936 kernel: audit: type=1403 audit(1773449007.804:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:43:28.987957 systemd[1]: Successfully loaded SELinux policy in 58.258ms. Mar 14 00:43:28.988004 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.638ms. Mar 14 00:43:28.988026 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:43:28.988047 systemd[1]: Detected virtualization kvm. Mar 14 00:43:28.988068 systemd[1]: Detected architecture x86-64. Mar 14 00:43:28.988123 systemd[1]: Detected first boot. Mar 14 00:43:28.988145 systemd[1]: Initializing machine ID from VM UUID. 
Mar 14 00:43:28.988165 zram_generator::config[1056]: No configuration found. Mar 14 00:43:28.988195 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:43:28.988215 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 14 00:43:28.988236 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 14 00:43:28.988257 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 14 00:43:28.988278 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:43:28.988333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:43:28.988355 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:43:28.988376 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:43:28.988396 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:43:28.988417 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:43:28.988437 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:43:28.988460 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:43:28.988533 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:43:28.988589 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:43:28.988614 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:43:28.988634 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:43:28.988656 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:43:28.988713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:43:28.988735 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:43:28.988754 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:43:28.988775 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 14 00:43:28.988795 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 14 00:43:28.988848 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 14 00:43:28.988870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:43:28.988890 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:43:28.988910 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:43:28.988931 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:43:28.988952 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:43:28.988971 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:43:28.988991 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:43:28.989042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:43:28.989065 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:43:28.989089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:43:28.989109 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Mar 14 00:43:28.989129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:43:28.989150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:43:28.989170 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:43:28.989191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:43:28.989211 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:43:28.989260 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:43:28.989283 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 14 00:43:28.989304 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:43:28.989324 systemd[1]: Reached target machines.target - Containers. Mar 14 00:43:28.989345 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:43:28.989366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:43:28.989386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:43:28.989406 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:43:28.989456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:43:28.989478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:43:28.989571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:43:28.989593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:43:28.989614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:43:28.989636 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:43:28.989692 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 14 00:43:28.989716 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 14 00:43:28.989736 kernel: fuse: init (API version 7.39) Mar 14 00:43:28.989789 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 14 00:43:28.989810 systemd[1]: Stopped systemd-fsck-usr.service. Mar 14 00:43:28.989830 kernel: ACPI: bus type drm_connector registered Mar 14 00:43:28.989850 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:43:28.989871 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:43:28.989891 kernel: loop: module loaded Mar 14 00:43:28.989910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:43:28.989929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:43:28.989950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:43:28.990034 systemd-journald[1140]: Collecting audit messages is disabled. Mar 14 00:43:28.990072 systemd[1]: verity-setup.service: Deactivated successfully. Mar 14 00:43:28.990093 systemd[1]: Stopped verity-setup.service. 
Mar 14 00:43:28.990113 systemd-journald[1140]: Journal started Mar 14 00:43:28.990147 systemd-journald[1140]: Runtime Journal (/run/log/journal/fa4a62fd6a6340aca48aa21ee3cc7b17) is 6.0M, max 48.4M, 42.3M free. Mar 14 00:43:29.000632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:43:28.480730 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:43:28.498155 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 14 00:43:28.498966 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 14 00:43:28.499349 systemd[1]: systemd-journald.service: Consumed 1.724s CPU time. Mar 14 00:43:29.004575 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:43:29.009740 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:43:29.013139 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:43:29.016415 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:43:29.019826 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:43:29.024116 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:43:29.028392 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:43:29.032184 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:43:29.036780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:43:29.041458 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:43:29.041824 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:43:29.046170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:43:29.046759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:43:29.052295 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:43:29.052694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:43:29.056179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:43:29.056437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:43:29.060980 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:43:29.061216 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:43:29.065202 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:43:29.065469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:43:29.070309 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:43:29.074198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:43:29.078307 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:43:29.098535 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:43:29.119784 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:43:29.124880 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:43:29.128337 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 14 00:43:29.128440 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:43:29.133305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:43:29.139643 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:43:29.144974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:43:29.148132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:43:29.153227 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:43:29.159645 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:43:29.164580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:43:29.167202 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:43:29.171615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:43:29.174790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:43:29.180317 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:43:29.186616 systemd-journald[1140]: Time spent on flushing to /var/log/journal/fa4a62fd6a6340aca48aa21ee3cc7b17 is 18.685ms for 946 entries. Mar 14 00:43:29.186616 systemd-journald[1140]: System Journal (/var/log/journal/fa4a62fd6a6340aca48aa21ee3cc7b17) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:43:29.231964 systemd-journald[1140]: Received client request to flush runtime journal. Mar 14 00:43:29.232019 kernel: loop0: detected capacity change from 0 to 140768 Mar 14 00:43:29.186819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:43:29.197090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:43:29.203228 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:43:29.203606 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:43:29.214645 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:43:29.222227 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:43:29.231254 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:43:29.247841 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:43:29.258211 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:43:29.263480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:43:29.272565 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:43:29.274276 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Mar 14 00:43:29.274305 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Mar 14 00:43:29.282132 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 14 00:43:29.289009 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 14 00:43:29.305912 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:43:29.310764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:43:29.312105 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:43:29.316314 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:43:29.329649 kernel: loop1: detected capacity change from 0 to 219192 Mar 14 00:43:29.356643 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:43:29.370803 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:43:29.381785 kernel: loop2: detected capacity change from 0 to 142488 Mar 14 00:43:29.402396 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Mar 14 00:43:29.402450 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Mar 14 00:43:29.408839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:43:29.431572 kernel: loop3: detected capacity change from 0 to 140768 Mar 14 00:43:29.458008 kernel: loop4: detected capacity change from 0 to 219192 Mar 14 00:43:29.476599 kernel: loop5: detected capacity change from 0 to 142488 Mar 14 00:43:29.498938 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 14 00:43:29.500036 (sd-merge)[1197]: Merged extensions into '/usr'. Mar 14 00:43:29.508256 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:43:29.508418 systemd[1]: Reloading... Mar 14 00:43:29.582551 zram_generator::config[1221]: No configuration found. Mar 14 00:43:29.677168 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:43:29.775845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:43:29.829401 systemd[1]: Reloading finished in 320 ms. Mar 14 00:43:29.870170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:43:29.875066 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:43:29.881011 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:43:29.908041 systemd[1]: Starting ensure-sysext.service... Mar 14 00:43:29.913368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:43:29.920988 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:43:29.928244 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:43:29.928283 systemd[1]: Reloading... Mar 14 00:43:29.941177 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:43:29.941621 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:43:29.942785 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:43:29.943211 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Mar 14 00:43:29.943339 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. 
Mar 14 00:43:29.948736 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:43:29.948767 systemd-tmpfiles[1262]: Skipping /boot Mar 14 00:43:29.961977 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:43:29.962005 systemd-tmpfiles[1262]: Skipping /boot Mar 14 00:43:29.993586 zram_generator::config[1286]: No configuration found. Mar 14 00:43:29.994451 systemd-udevd[1263]: Using default interface naming scheme 'v255'. Mar 14 00:43:30.111545 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1303) Mar 14 00:43:30.160574 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 14 00:43:30.166581 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:43:30.194637 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:43:30.252355 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:43:30.267606 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 14 00:43:30.300638 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 14 00:43:30.302850 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 14 00:43:30.389560 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:43:30.412464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:43:30.414275 kernel: kvm_amd: TSC scaling supported Mar 14 00:43:30.414372 kernel: kvm_amd: Nested Virtualization enabled Mar 14 00:43:30.414402 kernel: kvm_amd: Nested Paging enabled Mar 14 00:43:30.414421 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 14 00:43:30.421147 kernel: kvm_amd: PMU virtualization is disabled Mar 14 00:43:30.431820 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:43:30.433302 systemd[1]: Reloading finished in 504 ms. Mar 14 00:43:30.480881 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:43:30.486048 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:43:30.497577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:43:30.526265 systemd[1]: Finished ensure-sysext.service. Mar 14 00:43:30.529849 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:43:30.567406 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:43:30.584943 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:43:30.592956 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:43:30.598062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:43:30.600885 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:43:30.614741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:43:30.617627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:43:30.630419 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 14 00:43:30.630834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:43:30.644959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:43:30.652378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:43:30.654404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:43:30.661580 augenrules[1382]: No rules Mar 14 00:43:30.663466 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:43:30.673856 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:43:30.682154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:43:30.690328 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:43:30.698145 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:43:30.709102 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:43:30.713077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:43:30.714653 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:43:30.721252 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:43:30.728895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:43:30.729231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:43:30.733724 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:43:30.740792 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:43:30.741127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:43:30.747233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:43:30.747646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:43:30.754550 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:43:30.754962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:43:30.760978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:43:30.767211 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:43:30.785322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:43:30.807950 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:43:30.813106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:43:30.813268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:43:30.815749 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:43:30.816763 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:43:30.981005 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 14 00:43:30.985008 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:43:30.986601 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:43:30.992208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:43:30.997357 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:43:31.002547 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:43:31.042745 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:43:31.138937 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:43:31.140426 systemd-resolved[1390]: Positive Trust Anchors: Mar 14 00:43:31.140445 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:43:31.140564 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:43:31.147093 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:43:31.147887 systemd-networkd[1388]: lo: Link UP Mar 14 00:43:31.147895 systemd-networkd[1388]: lo: Gained carrier Mar 14 00:43:31.149442 systemd-resolved[1390]: Defaulting to hostname 'linux'. Mar 14 00:43:31.151003 systemd-networkd[1388]: Enumeration completed Mar 14 00:43:31.151231 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:43:31.152388 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:43:31.152468 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:43:31.154226 systemd-networkd[1388]: eth0: Link UP Mar 14 00:43:31.154384 systemd-networkd[1388]: eth0: Gained carrier Mar 14 00:43:31.154446 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:43:31.155334 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:43:31.159446 systemd[1]: Reached target network.target - Network. Mar 14 00:43:31.162381 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:43:31.166065 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:43:31.170058 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:43:31.175267 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:43:31.179582 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:43:31.183022 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Mar 14 00:43:31.187551 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:43:31.187653 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.158/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:43:31.188928 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Mar 14 00:43:31.190043 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 14 00:43:31.190118 systemd-timesyncd[1391]: Initial clock synchronization to Sat 2026-03-14 00:43:30.883927 UTC. Mar 14 00:43:31.193036 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:43:31.193107 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:43:31.196992 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:43:31.202276 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:43:31.209656 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:43:31.220908 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:43:31.232273 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:43:31.238062 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:43:31.242828 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:43:31.246970 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:43:31.251199 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:43:31.251265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:43:31.253112 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:43:31.261396 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:43:31.270317 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:43:31.278754 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:43:31.283419 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:43:31.285548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:43:31.295731 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:43:31.305282 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:43:31.320401 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:43:31.343891 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:43:31.371085 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:43:31.377772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:43:31.395861 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:43:31.538977 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 14 00:43:31.608995 update_engine[1437]: I20260314 00:43:31.505575 1437 main.cc:92] Flatcar Update Engine starting Mar 14 00:43:31.630464 dbus-daemon[1426]: [system] SELinux support is enabled Mar 14 00:43:31.633858 update_engine[1437]: I20260314 00:43:31.633800 1437 update_check_scheduler.cc:74] Next update check in 10m29s Mar 14 00:43:31.672146 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:43:31.804847 jq[1440]: true Mar 14 00:43:31.806787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:43:31.807069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:43:31.816429 jq[1427]: false Mar 14 00:43:31.823418 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:43:31.823866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:43:31.841913 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:43:31.842840 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:43:31.887330 jq[1444]: true Mar 14 00:43:31.894821 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:43:31.895999 extend-filesystems[1428]: Found loop3 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found loop4 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found loop5 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found sr0 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda1 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda2 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda3 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found usr Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda4 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda6 Mar 14 00:43:31.895999 extend-filesystems[1428]: Found vda7 Mar 14 00:43:31.894866 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:43:32.124350 tar[1443]: linux-amd64/LICENSE Mar 14 00:43:32.124350 tar[1443]: linux-amd64/helm Mar 14 00:43:32.125737 extend-filesystems[1428]: Found vda9 Mar 14 00:43:32.125737 extend-filesystems[1428]: Checking size of /dev/vda9 Mar 14 00:43:31.909170 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:43:31.910995 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:43:32.057660 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:43:32.078785 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:43:32.089794 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 14 00:43:32.140583 extend-filesystems[1428]: Resized partition /dev/vda9 Mar 14 00:43:32.174959 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:43:32.192539 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 14 00:43:32.212444 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:43:32.212659 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:43:32.213581 systemd-logind[1436]: New seat seat0. Mar 14 00:43:32.215608 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:43:32.231947 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:43:32.263595 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1329) Mar 14 00:43:32.272025 systemd-networkd[1388]: eth0: Gained IPv6LL Mar 14 00:43:32.278287 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:43:32.292336 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:43:32.320637 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 14 00:43:32.321163 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 14 00:43:32.330628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:43:32.358874 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:43:32.388022 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 14 00:43:32.388022 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 14 00:43:32.388022 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 14 00:43:32.412442 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Mar 14 00:43:32.403274 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:43:32.418963 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:43:32.403719 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:43:32.429762 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:43:32.459824 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 14 00:43:32.492052 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:43:32.499086 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:43:32.509074 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 14 00:43:32.509738 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 14 00:43:32.517414 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:43:32.910136 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:43:32.984598 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:43:33.006610 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:43:33.006987 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:43:33.044046 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:43:33.150773 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:43:33.168238 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 14 00:43:33.187000 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:43:33.193192 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:43:33.476934 containerd[1445]: time="2026-03-14T00:43:33.476655853Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:43:33.579558 containerd[1445]: time="2026-03-14T00:43:33.578994080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:43:33.588510 containerd[1445]: time="2026-03-14T00:43:33.588108036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:43:33.588510 containerd[1445]: time="2026-03-14T00:43:33.588376943Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:43:33.588830 containerd[1445]: time="2026-03-14T00:43:33.588612917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:43:33.589314 containerd[1445]: time="2026-03-14T00:43:33.589248958Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:43:33.589314 containerd[1445]: time="2026-03-14T00:43:33.589290878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:43:33.590064 containerd[1445]: time="2026-03-14T00:43:33.589951293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:43:33.590064 containerd[1445]: time="2026-03-14T00:43:33.590034482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:43:33.590852 containerd[1445]: time="2026-03-14T00:43:33.590748532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:43:33.590852 containerd[1445]: time="2026-03-14T00:43:33.590836138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:43:33.590927 containerd[1445]: time="2026-03-14T00:43:33.590865004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:43:33.590927 containerd[1445]: time="2026-03-14T00:43:33.590882522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:43:33.601625 containerd[1445]: time="2026-03-14T00:43:33.599710441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:43:33.603719 containerd[1445]: time="2026-03-14T00:43:33.603588423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:43:33.603919 containerd[1445]: time="2026-03-14T00:43:33.603851626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:43:33.603919 containerd[1445]: time="2026-03-14T00:43:33.603893982Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:43:33.604295 containerd[1445]: time="2026-03-14T00:43:33.604229684Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:43:33.604547 containerd[1445]: time="2026-03-14T00:43:33.604422412Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:43:33.622767 containerd[1445]: time="2026-03-14T00:43:33.622168953Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:43:33.625660 containerd[1445]: time="2026-03-14T00:43:33.622891002Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:43:33.625660 containerd[1445]: time="2026-03-14T00:43:33.623018735Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:43:33.625660 containerd[1445]: time="2026-03-14T00:43:33.623045791Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:43:33.625660 containerd[1445]: time="2026-03-14T00:43:33.623067210Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:43:33.627120 containerd[1445]: time="2026-03-14T00:43:33.626973187Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:43:33.630711 containerd[1445]: time="2026-03-14T00:43:33.630360910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:43:33.632620 containerd[1445]: time="2026-03-14T00:43:33.632508668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:43:33.632687 containerd[1445]: time="2026-03-14T00:43:33.632642753Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:43:33.632687 containerd[1445]: time="2026-03-14T00:43:33.632659796Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:43:33.632687 containerd[1445]: time="2026-03-14T00:43:33.632675512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.632814 containerd[1445]: time="2026-03-14T00:43:33.632726175Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.632814 containerd[1445]: time="2026-03-14T00:43:33.632778446Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.632814 containerd[1445]: time="2026-03-14T00:43:33.632796893Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 14 00:43:33.632866 containerd[1445]: time="2026-03-14T00:43:33.632837719Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.632979 containerd[1445]: time="2026-03-14T00:43:33.632925227Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.632979 containerd[1445]: time="2026-03-14T00:43:33.632963359Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.634521 containerd[1445]: time="2026-03-14T00:43:33.634025847Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:43:33.634521 containerd[1445]: time="2026-03-14T00:43:33.634551053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634598685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634612039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634623077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634633207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634720221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634770827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634807797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634818884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634831686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634843181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634853358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634864068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.634886 containerd[1445]: time="2026-03-14T00:43:33.634876520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:43:33.635193 containerd[1445]: time="2026-03-14T00:43:33.635154810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 14 00:43:33.635193 containerd[1445]: time="2026-03-14T00:43:33.635191026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.635234 containerd[1445]: time="2026-03-14T00:43:33.635203731Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:43:33.635284 containerd[1445]: time="2026-03-14T00:43:33.635255818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:43:33.635404 containerd[1445]: time="2026-03-14T00:43:33.635354180Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:43:33.635404 containerd[1445]: time="2026-03-14T00:43:33.635386871Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:43:33.635404 containerd[1445]: time="2026-03-14T00:43:33.635399412Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:43:33.635617 containerd[1445]: time="2026-03-14T00:43:33.635408233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:43:33.635617 containerd[1445]: time="2026-03-14T00:43:33.635418943Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:43:33.635617 containerd[1445]: time="2026-03-14T00:43:33.635428588Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:43:33.635617 containerd[1445]: time="2026-03-14T00:43:33.635536945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:43:33.636238 containerd[1445]: time="2026-03-14T00:43:33.636127872Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:43:33.636849 containerd[1445]: time="2026-03-14T00:43:33.636277790Z" level=info msg="Connect containerd service" Mar 14 00:43:33.636849 containerd[1445]: time="2026-03-14T00:43:33.636585468Z" level=info msg="using legacy CRI server" Mar 14 00:43:33.636849 containerd[1445]: time="2026-03-14T00:43:33.636596178Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:43:33.637349 containerd[1445]: time="2026-03-14T00:43:33.637281130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:43:33.644330 containerd[1445]: time="2026-03-14T00:43:33.643995028Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:43:33.645009 
containerd[1445]: time="2026-03-14T00:43:33.644573241Z" level=info msg="Start subscribing containerd event" Mar 14 00:43:33.645009 containerd[1445]: time="2026-03-14T00:43:33.644879806Z" level=info msg="Start recovering state" Mar 14 00:43:33.645217 containerd[1445]: time="2026-03-14T00:43:33.645151143Z" level=info msg="Start event monitor" Mar 14 00:43:33.645269 containerd[1445]: time="2026-03-14T00:43:33.645260692Z" level=info msg="Start snapshots syncer" Mar 14 00:43:33.645304 containerd[1445]: time="2026-03-14T00:43:33.645272012Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:43:33.645344 containerd[1445]: time="2026-03-14T00:43:33.645310890Z" level=info msg="Start streaming server" Mar 14 00:43:33.646314 containerd[1445]: time="2026-03-14T00:43:33.646251986Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:43:33.646378 containerd[1445]: time="2026-03-14T00:43:33.646332697Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:43:33.646413 containerd[1445]: time="2026-03-14T00:43:33.646385704Z" level=info msg="containerd successfully booted in 0.171757s" Mar 14 00:43:33.807348 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:43:34.332399 tar[1443]: linux-amd64/README.md Mar 14 00:43:34.382120 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:43:35.065722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:43:35.071719 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:43:35.073832 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:43:35.077692 systemd[1]: Startup finished in 4.505s (kernel) + 11.858s (initrd) + 7.327s (userspace) = 23.691s. Mar 14 00:43:35.607020 kubelet[1538]: E0314 00:43:35.606881 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:43:35.610700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:43:35.611021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:43:35.611676 systemd[1]: kubelet.service: Consumed 2.701s CPU time. Mar 14 00:43:36.737854 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:43:36.739602 systemd[1]: Started sshd@0-10.0.0.158:22-10.0.0.1:35272.service - OpenSSH per-connection server daemon (10.0.0.1:35272). Mar 14 00:43:36.797794 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 35272 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:36.801167 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:36.814372 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:43:36.825011 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:43:36.828238 systemd-logind[1436]: New session 1 of user core. Mar 14 00:43:36.842805 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:43:36.846934 systemd[1]: Starting user@500.service - User Manager for UID 500... 
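[annotation] The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is presumably written later by kubeadm or a similar bootstrapper, so this early failure (and the scheduled restart further down) is expected rather than a fault. A minimal sketch of the precondition the error message describes; only the path comes from the log, and this is not how the kubelet itself is implemented:

```python
# Illustrative sketch: reproduce the check behind the kubelet error above.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path taken from the log

def kubelet_config_present() -> bool:
    """Return True once something (e.g. kubeadm) has written the kubelet config."""
    return KUBELET_CONFIG.is_file()

if __name__ == "__main__":
    if not kubelet_config_present():
        print(f"{KUBELET_CONFIG} is missing; kubelet will exit 1, as in the log")
```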
Mar 14 00:43:36.868299 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:43:37.027612 systemd[1555]: Queued start job for default target default.target. Mar 14 00:43:37.037667 systemd[1555]: Created slice app.slice - User Application Slice. Mar 14 00:43:37.037736 systemd[1555]: Reached target paths.target - Paths. Mar 14 00:43:37.037756 systemd[1555]: Reached target timers.target - Timers. Mar 14 00:43:37.040706 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:43:37.059133 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:43:37.059370 systemd[1555]: Reached target sockets.target - Sockets. Mar 14 00:43:37.059421 systemd[1555]: Reached target basic.target - Basic System. Mar 14 00:43:37.059559 systemd[1555]: Reached target default.target - Main User Target. Mar 14 00:43:37.059635 systemd[1555]: Startup finished in 179ms. Mar 14 00:43:37.059820 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:43:37.062065 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:43:37.127128 systemd[1]: Started sshd@1-10.0.0.158:22-10.0.0.1:35280.service - OpenSSH per-connection server daemon (10.0.0.1:35280). Mar 14 00:43:37.171052 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 35280 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:37.173624 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:37.180242 systemd-logind[1436]: New session 2 of user core. Mar 14 00:43:37.189673 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:43:37.248615 sshd[1566]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:37.260533 systemd[1]: sshd@1-10.0.0.158:22-10.0.0.1:35280.service: Deactivated successfully. Mar 14 00:43:37.262653 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:43:37.266067 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:43:37.266565 systemd[1]: Started sshd@2-10.0.0.158:22-10.0.0.1:35294.service - OpenSSH per-connection server daemon (10.0.0.1:35294). Mar 14 00:43:37.269209 systemd-logind[1436]: Removed session 2. Mar 14 00:43:37.310899 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 35294 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:37.313146 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:37.319216 systemd-logind[1436]: New session 3 of user core. Mar 14 00:43:37.328759 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:43:37.382209 sshd[1573]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:37.391385 systemd[1]: sshd@2-10.0.0.158:22-10.0.0.1:35294.service: Deactivated successfully. Mar 14 00:43:37.393130 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:43:37.395208 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:43:37.409393 systemd[1]: Started sshd@3-10.0.0.158:22-10.0.0.1:35296.service - OpenSSH per-connection server daemon (10.0.0.1:35296). Mar 14 00:43:37.411088 systemd-logind[1436]: Removed session 3. 
Mar 14 00:43:37.450035 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 35296 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:37.452599 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:37.459343 systemd-logind[1436]: New session 4 of user core. Mar 14 00:43:37.468747 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:43:37.533659 sshd[1580]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:37.550901 systemd[1]: sshd@3-10.0.0.158:22-10.0.0.1:35296.service: Deactivated successfully. Mar 14 00:43:37.553261 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:43:37.555324 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:43:37.565589 systemd[1]: Started sshd@4-10.0.0.158:22-10.0.0.1:35304.service - OpenSSH per-connection server daemon (10.0.0.1:35304). Mar 14 00:43:37.568302 systemd-logind[1436]: Removed session 4. Mar 14 00:43:37.600140 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 35304 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:37.602276 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:37.608664 systemd-logind[1436]: New session 5 of user core. Mar 14 00:43:37.622767 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:43:37.691139 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:43:37.691725 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:43:37.711708 sudo[1590]: pam_unix(sudo:session): session closed for user root Mar 14 00:43:37.714273 sshd[1587]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:37.729628 systemd[1]: sshd@4-10.0.0.158:22-10.0.0.1:35304.service: Deactivated successfully. Mar 14 00:43:37.731595 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:43:37.733284 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:43:37.735328 systemd[1]: Started sshd@5-10.0.0.158:22-10.0.0.1:35308.service - OpenSSH per-connection server daemon (10.0.0.1:35308). Mar 14 00:43:37.736461 systemd-logind[1436]: Removed session 5. Mar 14 00:43:37.774948 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 35308 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:37.777357 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:37.784552 systemd-logind[1436]: New session 6 of user core. Mar 14 00:43:37.798929 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:43:37.860293 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:43:37.860797 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:43:37.867677 sudo[1599]: pam_unix(sudo:session): session closed for user root Mar 14 00:43:37.876540 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:43:37.876897 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:43:37.896540 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:43:37.902704 auditctl[1602]: No rules Mar 14 00:43:37.903283 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 14 00:43:37.903730 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:43:37.907010 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:43:37.963997 augenrules[1620]: No rules Mar 14 00:43:37.966578 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:43:37.968359 sudo[1598]: pam_unix(sudo:session): session closed for user root Mar 14 00:43:37.971111 sshd[1595]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:37.986093 systemd[1]: sshd@5-10.0.0.158:22-10.0.0.1:35308.service: Deactivated successfully. Mar 14 00:43:37.988747 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:43:37.995200 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:43:38.003929 systemd[1]: Started sshd@6-10.0.0.158:22-10.0.0.1:35322.service - OpenSSH per-connection server daemon (10.0.0.1:35322). Mar 14 00:43:38.005542 systemd-logind[1436]: Removed session 6. Mar 14 00:43:38.046352 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 35322 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:43:38.046918 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:38.055219 systemd-logind[1436]: New session 7 of user core. Mar 14 00:43:38.062935 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:43:38.124283 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:43:38.124922 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:43:38.514105 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:43:38.514178 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:43:38.880774 dockerd[1651]: time="2026-03-14T00:43:38.880638738Z" level=info msg="Starting up" Mar 14 00:43:39.124337 dockerd[1651]: time="2026-03-14T00:43:39.124188228Z" level=info msg="Loading containers: start." Mar 14 00:43:39.389564 kernel: Initializing XFRM netlink socket Mar 14 00:43:39.565113 systemd-networkd[1388]: docker0: Link UP Mar 14 00:43:39.602284 dockerd[1651]: time="2026-03-14T00:43:39.602163469Z" level=info msg="Loading containers: done." Mar 14 00:43:39.623860 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1645655702-merged.mount: Deactivated successfully. Mar 14 00:43:39.628367 dockerd[1651]: time="2026-03-14T00:43:39.628042598Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:43:39.628367 dockerd[1651]: time="2026-03-14T00:43:39.628356881Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:43:39.628754 dockerd[1651]: time="2026-03-14T00:43:39.628605382Z" level=info msg="Daemon has completed initialization" Mar 14 00:43:39.693166 dockerd[1651]: time="2026-03-14T00:43:39.692926015Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:43:39.693165 systemd[1]: Started docker.service - Docker Application Container Engine. 
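[annotation] dockerd reports "API listen on /run/docker.sock" above, so the Engine API is reachable over that unix socket. A minimal, hedged example of querying it from Python; it assumes the daemon from the log is still running and that the caller has read access to the socket, and uses the standard GET /version endpoint:

```python
# Minimal sketch: query the Docker Engine API over the unix socket named in the log.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix-domain socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    info = json.loads(conn.getresponse().read())
    print(info.get("Version"))  # the log above shows dockerd version 26.1.0
```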
Mar 14 00:43:40.302271 containerd[1445]: time="2026-03-14T00:43:40.302200023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 14 00:43:40.937082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314297957.mount: Deactivated successfully. Mar 14 00:43:42.402252 containerd[1445]: time="2026-03-14T00:43:42.402152933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:42.405152 containerd[1445]: time="2026-03-14T00:43:42.403872275Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 14 00:43:42.407045 containerd[1445]: time="2026-03-14T00:43:42.406743597Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:42.410815 containerd[1445]: time="2026-03-14T00:43:42.410659602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:42.412698 containerd[1445]: time="2026-03-14T00:43:42.412655248Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.110376729s" Mar 14 00:43:42.412698 containerd[1445]: time="2026-03-14T00:43:42.412686669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 14 00:43:42.413815 containerd[1445]: time="2026-03-14T00:43:42.413541816Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 14 00:43:44.104698 containerd[1445]: time="2026-03-14T00:43:44.104602515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:44.105662 containerd[1445]: time="2026-03-14T00:43:44.105596394Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 14 00:43:44.107009 containerd[1445]: time="2026-03-14T00:43:44.106948370Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:44.110715 containerd[1445]: time="2026-03-14T00:43:44.110665371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:44.112124 containerd[1445]: time="2026-03-14T00:43:44.112044707Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.698459407s" Mar 14 
00:43:44.112124 containerd[1445]: time="2026-03-14T00:43:44.112111254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 14 00:43:44.113964 containerd[1445]: time="2026-03-14T00:43:44.113704524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 14 00:43:45.298122 containerd[1445]: time="2026-03-14T00:43:45.298027966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:45.298966 containerd[1445]: time="2026-03-14T00:43:45.298909034Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 14 00:43:45.300185 containerd[1445]: time="2026-03-14T00:43:45.300112804Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:45.303369 containerd[1445]: time="2026-03-14T00:43:45.303294737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:45.305191 containerd[1445]: time="2026-03-14T00:43:45.305149772Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.191410614s" Mar 14 00:43:45.305282 containerd[1445]: time="2026-03-14T00:43:45.305193647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 14 00:43:45.305985 containerd[1445]: time="2026-03-14T00:43:45.305908857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 14 00:43:45.849122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:43:45.871897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:43:46.124731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:43:46.127243 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:43:46.189864 kubelet[1876]: E0314 00:43:46.189651 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:43:46.195797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:43:46.195990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:43:46.570017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310669297.mount: Deactivated successfully. 
Mar 14 00:43:46.830306 containerd[1445]: time="2026-03-14T00:43:46.830102101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:46.831340 containerd[1445]: time="2026-03-14T00:43:46.831253803Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 14 00:43:46.832954 containerd[1445]: time="2026-03-14T00:43:46.832886518Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:46.835433 containerd[1445]: time="2026-03-14T00:43:46.835303672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:46.836015 containerd[1445]: time="2026-03-14T00:43:46.835955191Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.530014412s" Mar 14 00:43:46.836015 containerd[1445]: time="2026-03-14T00:43:46.836005538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 14 00:43:46.836824 containerd[1445]: time="2026-03-14T00:43:46.836628665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 14 00:43:48.133748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188924986.mount: Deactivated successfully. 
Mar 14 00:43:49.823218 containerd[1445]: time="2026-03-14T00:43:49.822951612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:49.824241 containerd[1445]: time="2026-03-14T00:43:49.824088455Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 14 00:43:49.825706 containerd[1445]: time="2026-03-14T00:43:49.825635717Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:49.829207 containerd[1445]: time="2026-03-14T00:43:49.829144366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:49.830779 containerd[1445]: time="2026-03-14T00:43:49.830737488Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.994075851s" Mar 14 00:43:49.830845 containerd[1445]: time="2026-03-14T00:43:49.830780137Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 14 00:43:49.832182 containerd[1445]: time="2026-03-14T00:43:49.832081139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:43:50.217718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476053224.mount: Deactivated successfully. 
Mar 14 00:43:50.226323 containerd[1445]: time="2026-03-14T00:43:50.226249567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:50.227365 containerd[1445]: time="2026-03-14T00:43:50.227279404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 14 00:43:50.228835 containerd[1445]: time="2026-03-14T00:43:50.228753859Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:50.231610 containerd[1445]: time="2026-03-14T00:43:50.231554835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:50.232375 containerd[1445]: time="2026-03-14T00:43:50.232292713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 400.166994ms" Mar 14 00:43:50.232375 containerd[1445]: time="2026-03-14T00:43:50.232351100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 14 00:43:50.233012 containerd[1445]: time="2026-03-14T00:43:50.232848236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 14 00:43:50.690863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087146295.mount: Deactivated successfully. Mar 14 00:43:51.553271 containerd[1445]: time="2026-03-14T00:43:51.553196114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:51.554110 containerd[1445]: time="2026-03-14T00:43:51.554052391Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 14 00:43:51.555584 containerd[1445]: time="2026-03-14T00:43:51.555531902Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:51.559292 containerd[1445]: time="2026-03-14T00:43:51.559251758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:43:51.560442 containerd[1445]: time="2026-03-14T00:43:51.560366891Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.327491737s" Mar 14 00:43:51.560442 containerd[1445]: time="2026-03-14T00:43:51.560419710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 14 00:43:53.993115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
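[annotation] Each "Pulled image ... in <duration>" line above records how long one control-plane image took to fetch (from ~0.4 s for pause up to ~3 s for coredns). A hedged helper that extracts those timings from journal text shaped like the lines above; the regex is tailored to this exact message format and is illustrative only:

```python
# Illustrative: collect "Pulled image ... in <duration>" timings from journal text
# that looks like the containerd lines above. The format assumptions are mine.
import re

PULLED = re.compile(r'Pulled image \\"([^\\"]+)\\".* in ([0-9.]+m?s)')

def pull_times(journal_text: str) -> dict:
    """Map image reference -> reported pull duration string."""
    return {image: dur for image, dur in PULLED.findall(journal_text)}

if __name__ == "__main__":
    sample = r'msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id ... in 1.327491737s"'
    print(pull_times(sample))  # {'registry.k8s.io/etcd:3.6.5-0': '1.327491737s'}
```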
Mar 14 00:43:54.008809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:43:54.037877 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit session-7.scope)... Mar 14 00:43:54.037951 systemd[1]: Reloading... Mar 14 00:43:54.122628 zram_generator::config[2076]: No configuration found. Mar 14 00:43:54.264663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:43:54.346614 systemd[1]: Reloading finished in 307 ms. Mar 14 00:43:54.411309 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:43:54.411424 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:43:54.411814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:43:54.414396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:43:54.587800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:43:54.597912 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:43:54.653845 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:43:54.653845 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:43:54.653845 kubelet[2128]: I0314 00:43:54.653833 2128 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:43:54.983222 kubelet[2128]: I0314 00:43:54.982751 2128 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:43:54.983222 kubelet[2128]: I0314 00:43:54.982791 2128 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:43:54.985017 kubelet[2128]: I0314 00:43:54.984928 2128 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:43:54.985017 kubelet[2128]: I0314 00:43:54.984962 2128 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:43:54.985552 kubelet[2128]: I0314 00:43:54.985458 2128 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:43:55.035002 kubelet[2128]: E0314 00:43:55.034831 2128 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:43:55.035002 kubelet[2128]: I0314 00:43:55.034920 2128 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:43:55.041147 kubelet[2128]: E0314 00:43:55.041065 2128 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:43:55.041147 kubelet[2128]: I0314 00:43:55.041140 2128 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:43:55.049412 kubelet[2128]: I0314 00:43:55.049293 2128 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 14 00:43:55.051128 kubelet[2128]: I0314 00:43:55.051043 2128 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:43:55.051272 kubelet[2128]: I0314 00:43:55.051094 2128 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:43:55.051272 kubelet[2128]: I0314 00:43:55.051262 2128 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:43:55.051272 kubelet[2128]: I0314 00:43:55.051273 2128 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:43:55.051718 kubelet[2128]: I0314 00:43:55.051412 2128 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Mar 14 00:43:55.054331 kubelet[2128]: I0314 00:43:55.054244 2128 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:43:55.054657 kubelet[2128]: I0314 00:43:55.054618 2128 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:43:55.054796 kubelet[2128]: I0314 00:43:55.054746 2128 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:43:55.054796 kubelet[2128]: I0314 00:43:55.054778 2128 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:43:55.054796 kubelet[2128]: I0314 00:43:55.054797 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:43:55.055665 kubelet[2128]: E0314 00:43:55.055587 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:43:55.055759 kubelet[2128]: E0314 00:43:55.055725 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:43:55.057932 kubelet[2128]: I0314 00:43:55.057339 2128 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:43:55.059781 kubelet[2128]: I0314 00:43:55.058232 2128 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:43:55.059781 kubelet[2128]: I0314 00:43:55.058354 2128 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:43:55.059781 kubelet[2128]: W0314 00:43:55.058427 2128 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 14 00:43:55.063818 kubelet[2128]: I0314 00:43:55.063685 2128 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:43:55.063887 kubelet[2128]: I0314 00:43:55.063770 2128 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:43:55.064264 kubelet[2128]: I0314 00:43:55.064247 2128 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:43:55.066173 kubelet[2128]: I0314 00:43:55.064405 2128 server.go:1262] "Started kubelet" Mar 14 00:43:55.066173 kubelet[2128]: I0314 00:43:55.064444 2128 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:43:55.066173 kubelet[2128]: I0314 00:43:55.065313 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:43:55.066173 kubelet[2128]: I0314 00:43:55.065557 2128 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:43:55.067438 kubelet[2128]: I0314 00:43:55.067136 2128 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:43:55.069847 kubelet[2128]: E0314 00:43:55.067360 2128 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.158:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.158:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8e80522e71f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:43:55.062866418 +0000 UTC m=+0.459229297,LastTimestamp:2026-03-14 00:43:55.062866418 +0000 UTC m=+0.459229297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:43:55.072210 kubelet[2128]: E0314 00:43:55.071932 2128 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:43:55.072210 kubelet[2128]: I0314 00:43:55.071966 2128 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:43:55.072397 kubelet[2128]: I0314 00:43:55.072346 2128 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:43:55.072600 kubelet[2128]: I0314 00:43:55.072574 2128 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:43:55.073600 kubelet[2128]: I0314 00:43:55.072821 2128 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:43:55.073600 kubelet[2128]: I0314 00:43:55.072932 2128 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:43:55.073600 kubelet[2128]: E0314 00:43:55.073206 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:43:55.073600 kubelet[2128]: E0314 00:43:55.073362 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="200ms" Mar 14 00:43:55.073600 kubelet[2128]: E0314 00:43:55.073258 2128 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:43:55.074632 kubelet[2128]: I0314 00:43:55.074594 2128 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:43:55.091798 kubelet[2128]: I0314 00:43:55.091721 2128 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:43:55.091798 kubelet[2128]: I0314 00:43:55.091752 2128 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:43:55.091798 kubelet[2128]: I0314 00:43:55.091767 2128 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:43:55.094678 kubelet[2128]: I0314 00:43:55.094647 2128 policy_none.go:49] "None policy: Start" Mar 14 00:43:55.094678 kubelet[2128]: I0314 00:43:55.094667 2128 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:43:55.094678 kubelet[2128]: I0314 00:43:55.094678 2128 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:43:55.096931 kubelet[2128]: I0314 00:43:55.096850 2128 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:43:55.097978 kubelet[2128]: I0314 00:43:55.097385 2128 policy_none.go:47] "Start" Mar 14 00:43:55.099846 kubelet[2128]: I0314 00:43:55.099770 2128 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:43:55.099846 kubelet[2128]: I0314 00:43:55.099821 2128 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:43:55.099846 kubelet[2128]: I0314 00:43:55.099849 2128 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:43:55.099952 kubelet[2128]: E0314 00:43:55.099887 2128 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:43:55.100626 kubelet[2128]: E0314 00:43:55.100605 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:43:55.103178 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:43:55.114924 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:43:55.119752 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:43:55.131153 kubelet[2128]: E0314 00:43:55.131078 2128 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:43:55.131610 kubelet[2128]: I0314 00:43:55.131566 2128 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:43:55.131655 kubelet[2128]: I0314 00:43:55.131603 2128 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:43:55.131941 kubelet[2128]: I0314 00:43:55.131911 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:43:55.133056 kubelet[2128]: E0314 00:43:55.132975 2128 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:43:55.133056 kubelet[2128]: E0314 00:43:55.133057 2128 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 14 00:43:55.215669 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 14 00:43:55.231679 kubelet[2128]: E0314 00:43:55.231634 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:43:55.232723 kubelet[2128]: I0314 00:43:55.232567 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:43:55.232887 kubelet[2128]: E0314 00:43:55.232817 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Mar 14 00:43:55.235566 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 14 00:43:55.239360 kubelet[2128]: E0314 00:43:55.239293 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:43:55.241470 systemd[1]: Created slice kubepods-burstable-pod3530d1c24c7f590338c10b7583d25372.slice - libcontainer container kubepods-burstable-pod3530d1c24c7f590338c10b7583d25372.slice. 
Mar 14 00:43:55.243650 kubelet[2128]: E0314 00:43:55.243585 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:43:55.274345 kubelet[2128]: I0314 00:43:55.274262 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:55.274345 kubelet[2128]: I0314 00:43:55.274319 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:55.274479 kubelet[2128]: I0314 00:43:55.274372 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3530d1c24c7f590338c10b7583d25372-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3530d1c24c7f590338c10b7583d25372\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:55.274479 kubelet[2128]: I0314 00:43:55.274389 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:55.274479 kubelet[2128]: I0314 00:43:55.274437 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:55.274479 kubelet[2128]: E0314 00:43:55.274453 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="400ms" Mar 14 00:43:55.274479 kubelet[2128]: I0314 00:43:55.274471 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:55.274675 kubelet[2128]: I0314 00:43:55.274612 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:43:55.274675 kubelet[2128]: I0314 00:43:55.274632 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/3530d1c24c7f590338c10b7583d25372-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3530d1c24c7f590338c10b7583d25372\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:55.274675 kubelet[2128]: I0314 00:43:55.274648 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3530d1c24c7f590338c10b7583d25372-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3530d1c24c7f590338c10b7583d25372\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:55.435194 kubelet[2128]: I0314 00:43:55.435071 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:43:55.435918 kubelet[2128]: E0314 00:43:55.435806 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Mar 14 00:43:55.536357 kubelet[2128]: E0314 00:43:55.536117 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:55.537795 containerd[1445]: time="2026-03-14T00:43:55.537742015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 14 00:43:55.543022 kubelet[2128]: E0314 00:43:55.542933 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:55.543912 containerd[1445]: time="2026-03-14T00:43:55.543770924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 14 00:43:55.546463 kubelet[2128]: E0314 00:43:55.546319 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:55.546937 containerd[1445]: time="2026-03-14T00:43:55.546898116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3530d1c24c7f590338c10b7583d25372,Namespace:kube-system,Attempt:0,}" Mar 14 00:43:55.675530 kubelet[2128]: E0314 00:43:55.675429 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="800ms" Mar 14 00:43:55.837436 kubelet[2128]: I0314 00:43:55.837258 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:43:55.837771 kubelet[2128]: E0314 00:43:55.837727 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Mar 14 00:43:55.945931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986460732.mount: Deactivated successfully. 
Mar 14 00:43:55.954102 containerd[1445]: time="2026-03-14T00:43:55.953958917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:43:55.958049 containerd[1445]: time="2026-03-14T00:43:55.958006186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:43:55.959875 containerd[1445]: time="2026-03-14T00:43:55.959655447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:43:55.961276 containerd[1445]: time="2026-03-14T00:43:55.961193771Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:43:55.963377 containerd[1445]: time="2026-03-14T00:43:55.963320343Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 14 00:43:55.963450 containerd[1445]: time="2026-03-14T00:43:55.963401773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:43:55.964408 containerd[1445]: time="2026-03-14T00:43:55.964315233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:43:55.967554 containerd[1445]: time="2026-03-14T00:43:55.967457465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:43:55.969510 containerd[1445]: time="2026-03-14T00:43:55.969430697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.453373ms" Mar 14 00:43:55.971907 containerd[1445]: time="2026-03-14T00:43:55.971798712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 433.908763ms" Mar 14 00:43:55.976258 containerd[1445]: time="2026-03-14T00:43:55.976197764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 432.284017ms" Mar 14 00:43:55.978812 kubelet[2128]: E0314 00:43:55.978763 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:43:56.006558 kubelet[2128]: E0314 00:43:56.005766 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:43:56.008067 kubelet[2128]: E0314 00:43:56.008041 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.158:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:43:56.101563 containerd[1445]: time="2026-03-14T00:43:56.101278486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:43:56.101563 containerd[1445]: time="2026-03-14T00:43:56.101327431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:43:56.101563 containerd[1445]: time="2026-03-14T00:43:56.101337625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:43:56.101563 containerd[1445]: time="2026-03-14T00:43:56.101410057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:43:56.101761 containerd[1445]: time="2026-03-14T00:43:56.101096228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:43:56.101761 containerd[1445]: time="2026-03-14T00:43:56.101142373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:43:56.101761 containerd[1445]: time="2026-03-14T00:43:56.101153877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:43:56.101761 containerd[1445]: time="2026-03-14T00:43:56.101226560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:43:56.107522 containerd[1445]: time="2026-03-14T00:43:56.107167744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:43:56.107522 containerd[1445]: time="2026-03-14T00:43:56.107208677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:43:56.107522 containerd[1445]: time="2026-03-14T00:43:56.107219471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:43:56.110742 containerd[1445]: time="2026-03-14T00:43:56.110543579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:43:56.132831 systemd[1]: Started cri-containerd-4dad0e6a6a6c0f56432346b75740e0d6eb1b91030ea24fbe862cb1e2c1ea2f65.scope - libcontainer container 4dad0e6a6a6c0f56432346b75740e0d6eb1b91030ea24fbe862cb1e2c1ea2f65. Mar 14 00:43:56.142255 systemd[1]: Started cri-containerd-7f1de74c9a9a06bcd4ea17b4aa759cd52b693bb1788bfee095336735a35e4be2.scope - libcontainer container 7f1de74c9a9a06bcd4ea17b4aa759cd52b693bb1788bfee095336735a35e4be2. Mar 14 00:43:56.150974 systemd[1]: Started cri-containerd-9595bb10b79ef6a7834e66299b3345dcec61a3b620bd277420100b04cb131241.scope - libcontainer container 9595bb10b79ef6a7834e66299b3345dcec61a3b620bd277420100b04cb131241. Mar 14 00:43:56.201143 containerd[1445]: time="2026-03-14T00:43:56.201021296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f1de74c9a9a06bcd4ea17b4aa759cd52b693bb1788bfee095336735a35e4be2\"" Mar 14 00:43:56.203044 kubelet[2128]: E0314 00:43:56.202935 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:56.205854 containerd[1445]: time="2026-03-14T00:43:56.205784319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3530d1c24c7f590338c10b7583d25372,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dad0e6a6a6c0f56432346b75740e0d6eb1b91030ea24fbe862cb1e2c1ea2f65\"" Mar 14 00:43:56.207727 kubelet[2128]: E0314 00:43:56.207605 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:56.211991 containerd[1445]: time="2026-03-14T00:43:56.211798111Z" level=info msg="CreateContainer within sandbox \"7f1de74c9a9a06bcd4ea17b4aa759cd52b693bb1788bfee095336735a35e4be2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:43:56.212233 containerd[1445]: time="2026-03-14T00:43:56.212205215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9595bb10b79ef6a7834e66299b3345dcec61a3b620bd277420100b04cb131241\"" Mar 14 00:43:56.215467 kubelet[2128]: E0314 00:43:56.215341 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:56.217728 containerd[1445]: time="2026-03-14T00:43:56.217665414Z" level=info msg="CreateContainer within sandbox \"4dad0e6a6a6c0f56432346b75740e0d6eb1b91030ea24fbe862cb1e2c1ea2f65\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:43:56.220909 containerd[1445]: time="2026-03-14T00:43:56.220764750Z" level=info msg="CreateContainer within sandbox \"9595bb10b79ef6a7834e66299b3345dcec61a3b620bd277420100b04cb131241\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:43:56.234224 containerd[1445]: time="2026-03-14T00:43:56.234063852Z" level=info msg="CreateContainer within sandbox \"7f1de74c9a9a06bcd4ea17b4aa759cd52b693bb1788bfee095336735a35e4be2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32211f0db62d55cf0cf33c39bcbe5a93b9a0bb715afebf398354e5543d44bf3c\"" Mar 14 
00:43:56.235351 containerd[1445]: time="2026-03-14T00:43:56.235206863Z" level=info msg="StartContainer for \"32211f0db62d55cf0cf33c39bcbe5a93b9a0bb715afebf398354e5543d44bf3c\"" Mar 14 00:43:56.242892 containerd[1445]: time="2026-03-14T00:43:56.242435936Z" level=info msg="CreateContainer within sandbox \"4dad0e6a6a6c0f56432346b75740e0d6eb1b91030ea24fbe862cb1e2c1ea2f65\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db54c99a903fb58bbe5b9110a530e29f4df1cb8b16dade5c75611b06379318ac\"" Mar 14 00:43:56.243350 containerd[1445]: time="2026-03-14T00:43:56.243315501Z" level=info msg="StartContainer for \"db54c99a903fb58bbe5b9110a530e29f4df1cb8b16dade5c75611b06379318ac\"" Mar 14 00:43:56.246703 containerd[1445]: time="2026-03-14T00:43:56.246645949Z" level=info msg="CreateContainer within sandbox \"9595bb10b79ef6a7834e66299b3345dcec61a3b620bd277420100b04cb131241\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cca998f74016ba08041018448c2604a8db1b9e363bd9a511f3bb1082e32c72a7\"" Mar 14 00:43:56.247318 containerd[1445]: time="2026-03-14T00:43:56.247205536Z" level=info msg="StartContainer for \"cca998f74016ba08041018448c2604a8db1b9e363bd9a511f3bb1082e32c72a7\"" Mar 14 00:43:56.284839 systemd[1]: Started cri-containerd-32211f0db62d55cf0cf33c39bcbe5a93b9a0bb715afebf398354e5543d44bf3c.scope - libcontainer container 32211f0db62d55cf0cf33c39bcbe5a93b9a0bb715afebf398354e5543d44bf3c. Mar 14 00:43:56.295788 systemd[1]: Started cri-containerd-cca998f74016ba08041018448c2604a8db1b9e363bd9a511f3bb1082e32c72a7.scope - libcontainer container cca998f74016ba08041018448c2604a8db1b9e363bd9a511f3bb1082e32c72a7. Mar 14 00:43:56.302147 systemd[1]: Started cri-containerd-db54c99a903fb58bbe5b9110a530e29f4df1cb8b16dade5c75611b06379318ac.scope - libcontainer container db54c99a903fb58bbe5b9110a530e29f4df1cb8b16dade5c75611b06379318ac. 
Mar 14 00:43:56.365970 containerd[1445]: time="2026-03-14T00:43:56.364944182Z" level=info msg="StartContainer for \"cca998f74016ba08041018448c2604a8db1b9e363bd9a511f3bb1082e32c72a7\" returns successfully" Mar 14 00:43:56.373202 containerd[1445]: time="2026-03-14T00:43:56.372998963Z" level=info msg="StartContainer for \"32211f0db62d55cf0cf33c39bcbe5a93b9a0bb715afebf398354e5543d44bf3c\" returns successfully" Mar 14 00:43:56.381134 containerd[1445]: time="2026-03-14T00:43:56.381087945Z" level=info msg="StartContainer for \"db54c99a903fb58bbe5b9110a530e29f4df1cb8b16dade5c75611b06379318ac\" returns successfully" Mar 14 00:43:56.639964 kubelet[2128]: I0314 00:43:56.639567 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:43:57.122448 kubelet[2128]: E0314 00:43:57.122395 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:43:57.122847 kubelet[2128]: E0314 00:43:57.122608 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:57.124061 kubelet[2128]: E0314 00:43:57.124019 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:43:57.124172 kubelet[2128]: E0314 00:43:57.124133 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:57.129568 kubelet[2128]: E0314 00:43:57.129550 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:43:57.130177 kubelet[2128]: E0314 00:43:57.129713 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:57.722536 kubelet[2128]: E0314 00:43:57.722406 2128 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 14 00:43:57.786530 kubelet[2128]: I0314 00:43:57.786396 2128 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 14 00:43:57.874106 kubelet[2128]: I0314 00:43:57.873848 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:57.879335 kubelet[2128]: E0314 00:43:57.879268 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:43:57.879335 kubelet[2128]: I0314 00:43:57.879312 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:43:57.880964 kubelet[2128]: E0314 00:43:57.880921 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 14 00:43:57.880964 kubelet[2128]: I0314 00:43:57.880949 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:57.883054 kubelet[2128]: E0314 
00:43:57.882966 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:58.058417 kubelet[2128]: I0314 00:43:58.058178 2128 apiserver.go:52] "Watching apiserver" Mar 14 00:43:58.073170 kubelet[2128]: I0314 00:43:58.073052 2128 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:43:58.128088 kubelet[2128]: I0314 00:43:58.127430 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:43:58.128088 kubelet[2128]: I0314 00:43:58.127803 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:58.130769 kubelet[2128]: E0314 00:43:58.130685 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 14 00:43:58.130908 kubelet[2128]: E0314 00:43:58.130871 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:58.131690 kubelet[2128]: E0314 00:43:58.131631 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:58.131939 kubelet[2128]: E0314 00:43:58.131867 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:59.129782 kubelet[2128]: I0314 00:43:59.129622 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:43:59.137425 kubelet[2128]: E0314 00:43:59.137246 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:00.047193 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-7.scope)... Mar 14 00:44:00.047230 systemd[1]: Reloading... Mar 14 00:44:00.122589 zram_generator::config[2465]: No configuration found. Mar 14 00:44:00.131905 kubelet[2128]: E0314 00:44:00.131647 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:00.221409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:44:00.308674 systemd[1]: Reloading finished in 260 ms. Mar 14 00:44:00.362115 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:44:00.380455 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:44:00.380891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:44:00.380967 systemd[1]: kubelet.service: Consumed 1.121s CPU time, 127.6M memory peak, 0B memory swap peak. Mar 14 00:44:00.394599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:44:00.582916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:44:00.591627 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:44:00.661923 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:44:00.661923 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:44:00.662373 kubelet[2504]: I0314 00:44:00.661971 2504 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:44:00.671082 kubelet[2504]: I0314 00:44:00.671008 2504 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:44:00.671082 kubelet[2504]: I0314 00:44:00.671065 2504 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:44:00.671178 kubelet[2504]: I0314 00:44:00.671114 2504 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:44:00.671178 kubelet[2504]: I0314 00:44:00.671134 2504 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:44:00.671651 kubelet[2504]: I0314 00:44:00.671546 2504 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:44:00.673080 kubelet[2504]: I0314 00:44:00.673014 2504 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:44:00.676950 kubelet[2504]: I0314 00:44:00.676792 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:44:00.684404 kubelet[2504]: E0314 00:44:00.684357 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:44:00.684468 kubelet[2504]: I0314 00:44:00.684425 2504 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:44:00.690081 kubelet[2504]: I0314 00:44:00.690015 2504 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:44:00.690356 kubelet[2504]: I0314 00:44:00.690283 2504 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:44:00.690507 kubelet[2504]: I0314 00:44:00.690326 2504 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:44:00.690655 kubelet[2504]: I0314 00:44:00.690543 2504 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:44:00.690655 kubelet[2504]: I0314 00:44:00.690552 2504 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:44:00.690655 kubelet[2504]: I0314 00:44:00.690580 2504 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:44:00.690808 kubelet[2504]: I0314 00:44:00.690767 2504 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:44:00.691019 kubelet[2504]: I0314 00:44:00.690983 2504 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:44:00.691019 kubelet[2504]: I0314 00:44:00.691013 2504 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:44:00.691176 kubelet[2504]: I0314 00:44:00.691033 2504 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:44:00.691176 kubelet[2504]: I0314 00:44:00.691049 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:44:00.694582 kubelet[2504]: I0314 00:44:00.694534 2504 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:44:00.697558 kubelet[2504]: I0314 00:44:00.695169 2504 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:44:00.697558 kubelet[2504]: I0314 00:44:00.695197 2504 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:44:00.702275 
kubelet[2504]: I0314 00:44:00.702220 2504 server.go:1262] "Started kubelet" Mar 14 00:44:00.702592 kubelet[2504]: I0314 00:44:00.702448 2504 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:44:00.702592 kubelet[2504]: I0314 00:44:00.702564 2504 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:44:00.702946 kubelet[2504]: I0314 00:44:00.702831 2504 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:44:00.702946 kubelet[2504]: I0314 00:44:00.702925 2504 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:44:00.703436 kubelet[2504]: I0314 00:44:00.703390 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:44:00.704808 kubelet[2504]: I0314 00:44:00.704413 2504 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:44:00.707720 kubelet[2504]: E0314 00:44:00.707689 2504 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:44:00.710610 kubelet[2504]: I0314 00:44:00.710378 2504 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:44:00.711442 kubelet[2504]: I0314 00:44:00.711430 2504 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:44:00.711565 kubelet[2504]: I0314 00:44:00.710738 2504 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:44:00.712031 kubelet[2504]: I0314 00:44:00.712006 2504 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:44:00.714160 kubelet[2504]: I0314 00:44:00.714102 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:44:00.715331 kubelet[2504]: I0314 00:44:00.715271 2504 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:44:00.715331 kubelet[2504]: I0314 00:44:00.715307 2504 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:44:00.720412 kubelet[2504]: I0314 00:44:00.720345 2504 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:44:00.730426 kubelet[2504]: I0314 00:44:00.730297 2504 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:44:00.730702 kubelet[2504]: I0314 00:44:00.730443 2504 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:44:00.730702 kubelet[2504]: I0314 00:44:00.730700 2504 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:44:00.730765 kubelet[2504]: E0314 00:44:00.730742 2504 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:44:00.763915 kubelet[2504]: I0314 00:44:00.763779 2504 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:44:00.763915 kubelet[2504]: I0314 00:44:00.763832 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:44:00.763915 kubelet[2504]: I0314 00:44:00.763850 2504 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:44:00.764110 kubelet[2504]: I0314 00:44:00.764018 2504 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:44:00.764110 kubelet[2504]: I0314 00:44:00.764028 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:44:00.764110 kubelet[2504]: I0314 00:44:00.764045 2504 policy_none.go:49] "None policy: Start" Mar 14 00:44:00.764110 kubelet[2504]: I0314 00:44:00.764054 2504 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:44:00.764459 kubelet[2504]: I0314 00:44:00.764228 2504 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:44:00.764729 kubelet[2504]: I0314 00:44:00.764654 2504 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 14 00:44:00.764729 kubelet[2504]: I0314 00:44:00.764698 2504 policy_none.go:47] "Start" Mar 14 00:44:00.772130 kubelet[2504]: E0314 00:44:00.772089 2504 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:44:00.772284 kubelet[2504]: I0314 00:44:00.772252 2504 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:44:00.772313 kubelet[2504]: I0314 00:44:00.772285 2504 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:44:00.772449 kubelet[2504]: I0314 00:44:00.772421 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:44:00.776128 kubelet[2504]: E0314 00:44:00.776060 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 14 00:44:00.831677 kubelet[2504]: I0314 00:44:00.831634 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:44:00.831677 kubelet[2504]: I0314 00:44:00.831687 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:44:00.831898 kubelet[2504]: I0314 00:44:00.831645 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:44:00.839754 kubelet[2504]: E0314 00:44:00.839572 2504 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 14 00:44:00.880480 kubelet[2504]: I0314 00:44:00.880378 2504 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:44:00.889541 kubelet[2504]: I0314 00:44:00.889408 2504 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 14 00:44:00.889541 kubelet[2504]: I0314 00:44:00.889541 2504 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 14 00:44:00.912460 kubelet[2504]: I0314 00:44:00.912379 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3530d1c24c7f590338c10b7583d25372-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3530d1c24c7f590338c10b7583d25372\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:44:00.912460 kubelet[2504]: I0314 00:44:00.912432 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3530d1c24c7f590338c10b7583d25372-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3530d1c24c7f590338c10b7583d25372\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:44:00.912460 kubelet[2504]: I0314 00:44:00.912453 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3530d1c24c7f590338c10b7583d25372-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3530d1c24c7f590338c10b7583d25372\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:44:01.013598 kubelet[2504]: I0314 00:44:01.013426 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:44:01.013735 kubelet[2504]: I0314 00:44:01.013641 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:44:01.013735 kubelet[2504]: I0314 00:44:01.013668 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 
00:44:01.013735 kubelet[2504]: I0314 00:44:01.013690 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:44:01.013803 kubelet[2504]: I0314 00:44:01.013735 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:44:01.013803 kubelet[2504]: I0314 00:44:01.013763 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:44:01.138437 kubelet[2504]: E0314 00:44:01.138341 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:01.139991 kubelet[2504]: E0314 00:44:01.139786 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:01.139991 kubelet[2504]: E0314 00:44:01.139890 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:01.691980 kubelet[2504]: I0314 00:44:01.691891 2504 apiserver.go:52] "Watching apiserver" Mar 14 00:44:01.712392 kubelet[2504]: I0314 00:44:01.712163 2504 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:44:01.747370 kubelet[2504]: I0314 00:44:01.747305 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:44:01.747578 kubelet[2504]: E0314 00:44:01.747453 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:01.747764 kubelet[2504]: E0314 00:44:01.747722 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:01.761285 kubelet[2504]: E0314 00:44:01.760833 2504 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 14 00:44:01.761285 kubelet[2504]: E0314 00:44:01.761049 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:01.783728 kubelet[2504]: I0314 00:44:01.783596 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7835782199999999 podStartE2EDuration="1.78357822s" 
podCreationTimestamp="2026-03-14 00:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:44:01.772820671 +0000 UTC m=+1.174544091" watchObservedRunningTime="2026-03-14 00:44:01.78357822 +0000 UTC m=+1.185301629" Mar 14 00:44:01.835128 kubelet[2504]: I0314 00:44:01.835040 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8350196250000002 podStartE2EDuration="1.835019625s" podCreationTimestamp="2026-03-14 00:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:44:01.783740564 +0000 UTC m=+1.185463963" watchObservedRunningTime="2026-03-14 00:44:01.835019625 +0000 UTC m=+1.236743044" Mar 14 00:44:01.847626 kubelet[2504]: I0314 00:44:01.847563 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.847550773 podStartE2EDuration="2.847550773s" podCreationTimestamp="2026-03-14 00:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:44:01.835002557 +0000 UTC m=+1.236725976" watchObservedRunningTime="2026-03-14 00:44:01.847550773 +0000 UTC m=+1.249274172" Mar 14 00:44:02.748791 kubelet[2504]: E0314 00:44:02.748686 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:02.748791 kubelet[2504]: E0314 00:44:02.748741 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:02.748791 kubelet[2504]: E0314 00:44:02.748749 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:03.751396 kubelet[2504]: E0314 00:44:03.751343 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:06.360054 kubelet[2504]: I0314 00:44:06.359946 2504 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:44:06.360700 kubelet[2504]: I0314 00:44:06.360661 2504 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:44:06.360734 containerd[1445]: time="2026-03-14T00:44:06.360364861Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:44:07.141339 systemd[1]: Created slice kubepods-besteffort-pod2116c128_c69b_4f9e_9bbb_9a33744ecdd7.slice - libcontainer container kubepods-besteffort-pod2116c128_c69b_4f9e_9bbb_9a33744ecdd7.slice. 
Mar 14 00:44:07.158262 kubelet[2504]: I0314 00:44:07.158104 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2116c128-c69b-4f9e-9bbb-9a33744ecdd7-kube-proxy\") pod \"kube-proxy-slzth\" (UID: \"2116c128-c69b-4f9e-9bbb-9a33744ecdd7\") " pod="kube-system/kube-proxy-slzth" Mar 14 00:44:07.158262 kubelet[2504]: I0314 00:44:07.158175 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2116c128-c69b-4f9e-9bbb-9a33744ecdd7-lib-modules\") pod \"kube-proxy-slzth\" (UID: \"2116c128-c69b-4f9e-9bbb-9a33744ecdd7\") " pod="kube-system/kube-proxy-slzth" Mar 14 00:44:07.158262 kubelet[2504]: I0314 00:44:07.158206 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2116c128-c69b-4f9e-9bbb-9a33744ecdd7-xtables-lock\") pod \"kube-proxy-slzth\" (UID: \"2116c128-c69b-4f9e-9bbb-9a33744ecdd7\") " pod="kube-system/kube-proxy-slzth" Mar 14 00:44:07.158262 kubelet[2504]: I0314 00:44:07.158229 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqbbs\" (UniqueName: \"kubernetes.io/projected/2116c128-c69b-4f9e-9bbb-9a33744ecdd7-kube-api-access-wqbbs\") pod \"kube-proxy-slzth\" (UID: \"2116c128-c69b-4f9e-9bbb-9a33744ecdd7\") " pod="kube-system/kube-proxy-slzth" Mar 14 00:44:07.265214 kubelet[2504]: E0314 00:44:07.265168 2504 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 14 00:44:07.265214 kubelet[2504]: E0314 00:44:07.265211 2504 projected.go:196] Error preparing data for projected volume kube-api-access-wqbbs for pod kube-system/kube-proxy-slzth: configmap "kube-root-ca.crt" not found Mar 14 00:44:07.265361 kubelet[2504]: E0314 00:44:07.265262 2504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2116c128-c69b-4f9e-9bbb-9a33744ecdd7-kube-api-access-wqbbs podName:2116c128-c69b-4f9e-9bbb-9a33744ecdd7 nodeName:}" failed. No retries permitted until 2026-03-14 00:44:07.765245611 +0000 UTC m=+7.166969009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wqbbs" (UniqueName: "kubernetes.io/projected/2116c128-c69b-4f9e-9bbb-9a33744ecdd7-kube-api-access-wqbbs") pod "kube-proxy-slzth" (UID: "2116c128-c69b-4f9e-9bbb-9a33744ecdd7") : configmap "kube-root-ca.crt" not found Mar 14 00:44:07.514218 systemd[1]: Created slice kubepods-besteffort-pode6433fac_e829_44f1_b1a6_bde9a1a1e548.slice - libcontainer container kubepods-besteffort-pode6433fac_e829_44f1_b1a6_bde9a1a1e548.slice. 
Mar 14 00:44:07.562683 kubelet[2504]: I0314 00:44:07.562630 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6433fac-e829-44f1-b1a6-bde9a1a1e548-var-lib-calico\") pod \"tigera-operator-5588576f44-tpvx9\" (UID: \"e6433fac-e829-44f1-b1a6-bde9a1a1e548\") " pod="tigera-operator/tigera-operator-5588576f44-tpvx9" Mar 14 00:44:07.562683 kubelet[2504]: I0314 00:44:07.562684 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs8sw\" (UniqueName: \"kubernetes.io/projected/e6433fac-e829-44f1-b1a6-bde9a1a1e548-kube-api-access-vs8sw\") pod \"tigera-operator-5588576f44-tpvx9\" (UID: \"e6433fac-e829-44f1-b1a6-bde9a1a1e548\") " pod="tigera-operator/tigera-operator-5588576f44-tpvx9" Mar 14 00:44:07.822396 containerd[1445]: time="2026-03-14T00:44:07.822259660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-tpvx9,Uid:e6433fac-e829-44f1-b1a6-bde9a1a1e548,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:44:07.860074 containerd[1445]: time="2026-03-14T00:44:07.859640920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:07.860074 containerd[1445]: time="2026-03-14T00:44:07.859718071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:07.860074 containerd[1445]: time="2026-03-14T00:44:07.859733903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:07.860074 containerd[1445]: time="2026-03-14T00:44:07.859939506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:07.894764 systemd[1]: Started cri-containerd-c5bb6304013a2fc6bdc135d446b9a03e0567f1541e5ba3ff1b5f38f4a44c95cb.scope - libcontainer container c5bb6304013a2fc6bdc135d446b9a03e0567f1541e5ba3ff1b5f38f4a44c95cb. Mar 14 00:44:07.947393 containerd[1445]: time="2026-03-14T00:44:07.947324533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-tpvx9,Uid:e6433fac-e829-44f1-b1a6-bde9a1a1e548,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5bb6304013a2fc6bdc135d446b9a03e0567f1541e5ba3ff1b5f38f4a44c95cb\"" Mar 14 00:44:07.949479 containerd[1445]: time="2026-03-14T00:44:07.949402778Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:44:08.059901 kubelet[2504]: E0314 00:44:08.059855 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:08.060787 containerd[1445]: time="2026-03-14T00:44:08.060356570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-slzth,Uid:2116c128-c69b-4f9e-9bbb-9a33744ecdd7,Namespace:kube-system,Attempt:0,}" Mar 14 00:44:08.099476 containerd[1445]: time="2026-03-14T00:44:08.098992561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:08.099476 containerd[1445]: time="2026-03-14T00:44:08.099122835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:08.099476 containerd[1445]: time="2026-03-14T00:44:08.099203673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:08.101179 containerd[1445]: time="2026-03-14T00:44:08.101074648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:08.131669 systemd[1]: Started cri-containerd-d063cd8783f484216ab7d2672d0e21f0ed770c2d875f1e04b1dcb1b3bdfddc55.scope - libcontainer container d063cd8783f484216ab7d2672d0e21f0ed770c2d875f1e04b1dcb1b3bdfddc55. Mar 14 00:44:08.162189 containerd[1445]: time="2026-03-14T00:44:08.162036876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-slzth,Uid:2116c128-c69b-4f9e-9bbb-9a33744ecdd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d063cd8783f484216ab7d2672d0e21f0ed770c2d875f1e04b1dcb1b3bdfddc55\"" Mar 14 00:44:08.162900 kubelet[2504]: E0314 00:44:08.162845 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:08.169359 containerd[1445]: time="2026-03-14T00:44:08.169247855Z" level=info msg="CreateContainer within sandbox \"d063cd8783f484216ab7d2672d0e21f0ed770c2d875f1e04b1dcb1b3bdfddc55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:44:08.187885 containerd[1445]: time="2026-03-14T00:44:08.187760200Z" level=info msg="CreateContainer within sandbox \"d063cd8783f484216ab7d2672d0e21f0ed770c2d875f1e04b1dcb1b3bdfddc55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d64a38faf85f40b40a2f0b5d82bc5d17186295084a625cf988bc843e83068436\"" Mar 14 00:44:08.188914 containerd[1445]: time="2026-03-14T00:44:08.188838807Z" level=info msg="StartContainer for \"d64a38faf85f40b40a2f0b5d82bc5d17186295084a625cf988bc843e83068436\"" Mar 14 00:44:08.226732 systemd[1]: Started cri-containerd-d64a38faf85f40b40a2f0b5d82bc5d17186295084a625cf988bc843e83068436.scope - libcontainer container d64a38faf85f40b40a2f0b5d82bc5d17186295084a625cf988bc843e83068436. Mar 14 00:44:08.266881 containerd[1445]: time="2026-03-14T00:44:08.266833379Z" level=info msg="StartContainer for \"d64a38faf85f40b40a2f0b5d82bc5d17186295084a625cf988bc843e83068436\" returns successfully" Mar 14 00:44:08.676376 kubelet[2504]: E0314 00:44:08.676275 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:08.740126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999198892.mount: Deactivated successfully. 
Mar 14 00:44:08.763210 kubelet[2504]: E0314 00:44:08.763116 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:08.766457 kubelet[2504]: E0314 00:44:08.765897 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:08.787849 kubelet[2504]: I0314 00:44:08.787782 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-slzth" podStartSLOduration=1.7877676949999999 podStartE2EDuration="1.787767695s" podCreationTimestamp="2026-03-14 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:44:08.776647402 +0000 UTC m=+8.178370801" watchObservedRunningTime="2026-03-14 00:44:08.787767695 +0000 UTC m=+8.189491095" Mar 14 00:44:09.767769 kubelet[2504]: E0314 00:44:09.767707 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:11.456590 containerd[1445]: time="2026-03-14T00:44:11.456104016Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:11.457330 containerd[1445]: time="2026-03-14T00:44:11.457255737Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:44:11.460427 containerd[1445]: time="2026-03-14T00:44:11.460345531Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:11.464106 containerd[1445]: time="2026-03-14T00:44:11.464020538Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:11.465435 containerd[1445]: time="2026-03-14T00:44:11.465325717Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.515482767s" Mar 14 00:44:11.465435 containerd[1445]: time="2026-03-14T00:44:11.465411233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:44:11.471132 containerd[1445]: time="2026-03-14T00:44:11.471049887Z" level=info msg="CreateContainer within sandbox \"c5bb6304013a2fc6bdc135d446b9a03e0567f1541e5ba3ff1b5f38f4a44c95cb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:44:11.485571 containerd[1445]: time="2026-03-14T00:44:11.485368963Z" level=info msg="CreateContainer within sandbox \"c5bb6304013a2fc6bdc135d446b9a03e0567f1541e5ba3ff1b5f38f4a44c95cb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3aaeba167760e013c0699442a33844908e68e3106a13f06e711294920c6f3a13\"" Mar 14 00:44:11.486258 containerd[1445]: 
time="2026-03-14T00:44:11.486181468Z" level=info msg="StartContainer for \"3aaeba167760e013c0699442a33844908e68e3106a13f06e711294920c6f3a13\"" Mar 14 00:44:11.524739 systemd[1]: run-containerd-runc-k8s.io-3aaeba167760e013c0699442a33844908e68e3106a13f06e711294920c6f3a13-runc.Qgxy6l.mount: Deactivated successfully. Mar 14 00:44:11.532732 systemd[1]: Started cri-containerd-3aaeba167760e013c0699442a33844908e68e3106a13f06e711294920c6f3a13.scope - libcontainer container 3aaeba167760e013c0699442a33844908e68e3106a13f06e711294920c6f3a13. Mar 14 00:44:11.573448 containerd[1445]: time="2026-03-14T00:44:11.573308346Z" level=info msg="StartContainer for \"3aaeba167760e013c0699442a33844908e68e3106a13f06e711294920c6f3a13\" returns successfully" Mar 14 00:44:11.787563 kubelet[2504]: I0314 00:44:11.787142 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-tpvx9" podStartSLOduration=1.26963007 podStartE2EDuration="4.787122902s" podCreationTimestamp="2026-03-14 00:44:07 +0000 UTC" firstStartedPulling="2026-03-14 00:44:07.949082404 +0000 UTC m=+7.350805803" lastFinishedPulling="2026-03-14 00:44:11.466575236 +0000 UTC m=+10.868298635" observedRunningTime="2026-03-14 00:44:11.786388592 +0000 UTC m=+11.188111992" watchObservedRunningTime="2026-03-14 00:44:11.787122902 +0000 UTC m=+11.188846311" Mar 14 00:44:12.701008 kubelet[2504]: E0314 00:44:12.700807 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:13.582797 kubelet[2504]: E0314 00:44:13.581443 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:13.778935 kubelet[2504]: E0314 00:44:13.778569 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:16.390722 update_engine[1437]: I20260314 00:44:16.390587 1437 update_attempter.cc:509] Updating boot flags... Mar 14 00:44:16.432638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2896) Mar 14 00:44:16.478824 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2895) Mar 14 00:44:16.520203 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2895) Mar 14 00:44:17.107062 sudo[1632]: pam_unix(sudo:session): session closed for user root Mar 14 00:44:17.113725 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 14 00:44:17.120096 systemd[1]: sshd@6-10.0.0.158:22-10.0.0.1:35322.service: Deactivated successfully. Mar 14 00:44:17.123951 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:44:17.124429 systemd[1]: session-7.scope: Consumed 5.399s CPU time, 159.5M memory peak, 0B memory swap peak. Mar 14 00:44:17.125984 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:44:17.130241 systemd-logind[1436]: Removed session 7. Mar 14 00:44:19.618694 systemd[1]: Created slice kubepods-besteffort-pod560cbdfc_d8bd_4490_a942_9892cb60efde.slice - libcontainer container kubepods-besteffort-pod560cbdfc_d8bd_4490_a942_9892cb60efde.slice. 
Mar 14 00:44:19.646691 systemd[1]: Created slice kubepods-besteffort-pod34a9d2cf_7ab1_49fd_aaac_15859400ae99.slice - libcontainer container kubepods-besteffort-pod34a9d2cf_7ab1_49fd_aaac_15859400ae99.slice. Mar 14 00:44:19.652084 kubelet[2504]: I0314 00:44:19.651702 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-cni-bin-dir\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652084 kubelet[2504]: I0314 00:44:19.651780 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-lib-modules\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652084 kubelet[2504]: I0314 00:44:19.651798 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-nodeproc\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652084 kubelet[2504]: I0314 00:44:19.651814 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-var-lib-calico\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652084 kubelet[2504]: I0314 00:44:19.651827 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-xtables-lock\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652650 kubelet[2504]: I0314 00:44:19.651841 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-var-run-calico\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652650 kubelet[2504]: I0314 00:44:19.651856 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqxw5\" (UniqueName: \"kubernetes.io/projected/560cbdfc-d8bd-4490-a942-9892cb60efde-kube-api-access-qqxw5\") pod \"calico-typha-56bdd6bbc5-6jlsw\" (UID: \"560cbdfc-d8bd-4490-a942-9892cb60efde\") " pod="calico-system/calico-typha-56bdd6bbc5-6jlsw" Mar 14 00:44:19.652650 kubelet[2504]: I0314 00:44:19.651868 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-cni-log-dir\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652650 kubelet[2504]: I0314 00:44:19.651882 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-cni-net-dir\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652650 kubelet[2504]: I0314 00:44:19.651894 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34a9d2cf-7ab1-49fd-aaac-15859400ae99-node-certs\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652828 kubelet[2504]: I0314 00:44:19.651909 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-flexvol-driver-host\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652828 kubelet[2504]: I0314 00:44:19.651923 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/560cbdfc-d8bd-4490-a942-9892cb60efde-tigera-ca-bundle\") pod \"calico-typha-56bdd6bbc5-6jlsw\" (UID: \"560cbdfc-d8bd-4490-a942-9892cb60efde\") " pod="calico-system/calico-typha-56bdd6bbc5-6jlsw" Mar 14 00:44:19.652828 kubelet[2504]: I0314 00:44:19.651936 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw9c\" (UniqueName: \"kubernetes.io/projected/34a9d2cf-7ab1-49fd-aaac-15859400ae99-kube-api-access-rlw9c\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652828 kubelet[2504]: I0314 00:44:19.651949 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/560cbdfc-d8bd-4490-a942-9892cb60efde-typha-certs\") pod \"calico-typha-56bdd6bbc5-6jlsw\" (UID: \"560cbdfc-d8bd-4490-a942-9892cb60efde\") " pod="calico-system/calico-typha-56bdd6bbc5-6jlsw" Mar 14 00:44:19.652828 kubelet[2504]: I0314 00:44:19.651960 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-bpffs\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652937 kubelet[2504]: I0314 00:44:19.651973 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-policysync\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652937 kubelet[2504]: I0314 00:44:19.651986 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/34a9d2cf-7ab1-49fd-aaac-15859400ae99-sys-fs\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.652937 kubelet[2504]: I0314 00:44:19.651998 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/34a9d2cf-7ab1-49fd-aaac-15859400ae99-tigera-ca-bundle\") pod \"calico-node-8v7c6\" (UID: \"34a9d2cf-7ab1-49fd-aaac-15859400ae99\") " pod="calico-system/calico-node-8v7c6" Mar 14 00:44:19.729333 kubelet[2504]: E0314 00:44:19.729260 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:19.753466 kubelet[2504]: I0314 00:44:19.753382 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33ff238e-5cbd-4d42-b80c-67e32b8fb49d-kubelet-dir\") pod \"csi-node-driver-96thm\" (UID: \"33ff238e-5cbd-4d42-b80c-67e32b8fb49d\") " pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:19.753622 kubelet[2504]: I0314 00:44:19.753434 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/33ff238e-5cbd-4d42-b80c-67e32b8fb49d-registration-dir\") pod \"csi-node-driver-96thm\" (UID: \"33ff238e-5cbd-4d42-b80c-67e32b8fb49d\") " pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:19.753622 kubelet[2504]: I0314 00:44:19.753580 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/33ff238e-5cbd-4d42-b80c-67e32b8fb49d-varrun\") pod \"csi-node-driver-96thm\" (UID: \"33ff238e-5cbd-4d42-b80c-67e32b8fb49d\") " pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:19.755541 kubelet[2504]: I0314 00:44:19.753673 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqcs\" (UniqueName: \"kubernetes.io/projected/33ff238e-5cbd-4d42-b80c-67e32b8fb49d-kube-api-access-qnqcs\") pod \"csi-node-driver-96thm\" (UID: \"33ff238e-5cbd-4d42-b80c-67e32b8fb49d\") " pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:19.755541 kubelet[2504]: I0314 00:44:19.753730 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/33ff238e-5cbd-4d42-b80c-67e32b8fb49d-socket-dir\") pod \"csi-node-driver-96thm\" (UID: \"33ff238e-5cbd-4d42-b80c-67e32b8fb49d\") " pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:19.757111 kubelet[2504]: E0314 00:44:19.756880 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.757111 kubelet[2504]: W0314 00:44:19.756998 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.757563 kubelet[2504]: E0314 00:44:19.757410 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.765971 kubelet[2504]: E0314 00:44:19.765678 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.765971 kubelet[2504]: W0314 00:44:19.765699 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.765971 kubelet[2504]: E0314 00:44:19.765720 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.769440 kubelet[2504]: E0314 00:44:19.769325 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.770141 kubelet[2504]: W0314 00:44:19.769604 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.770564 kubelet[2504]: E0314 00:44:19.770544 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.773973 kubelet[2504]: E0314 00:44:19.773874 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.774443 kubelet[2504]: W0314 00:44:19.774425 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.774879 kubelet[2504]: E0314 00:44:19.774661 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.778574 kubelet[2504]: E0314 00:44:19.778556 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.778938 kubelet[2504]: W0314 00:44:19.778914 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.780872 kubelet[2504]: E0314 00:44:19.780596 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.781220 kubelet[2504]: E0314 00:44:19.781101 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.781220 kubelet[2504]: W0314 00:44:19.781154 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.781220 kubelet[2504]: E0314 00:44:19.781169 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.781656 kubelet[2504]: E0314 00:44:19.781553 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.781656 kubelet[2504]: W0314 00:44:19.781568 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.781656 kubelet[2504]: E0314 00:44:19.781578 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.782088 kubelet[2504]: E0314 00:44:19.782063 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.782088 kubelet[2504]: W0314 00:44:19.782073 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.782088 kubelet[2504]: E0314 00:44:19.782082 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.785135 kubelet[2504]: E0314 00:44:19.782602 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.785135 kubelet[2504]: W0314 00:44:19.782616 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.785135 kubelet[2504]: E0314 00:44:19.782625 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.785618 kubelet[2504]: E0314 00:44:19.785514 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.785618 kubelet[2504]: W0314 00:44:19.785551 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.785618 kubelet[2504]: E0314 00:44:19.785563 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.786074 kubelet[2504]: E0314 00:44:19.786036 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.786074 kubelet[2504]: W0314 00:44:19.786066 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.786074 kubelet[2504]: E0314 00:44:19.786077 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.795256 kubelet[2504]: E0314 00:44:19.795125 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.795815 kubelet[2504]: W0314 00:44:19.795797 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.796104 kubelet[2504]: E0314 00:44:19.796023 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.855865 kubelet[2504]: E0314 00:44:19.855804 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.855865 kubelet[2504]: W0314 00:44:19.855842 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.855865 kubelet[2504]: E0314 00:44:19.855862 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.856384 kubelet[2504]: E0314 00:44:19.856313 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.856384 kubelet[2504]: W0314 00:44:19.856351 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.856384 kubelet[2504]: E0314 00:44:19.856374 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.856996 kubelet[2504]: E0314 00:44:19.856956 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.856996 kubelet[2504]: W0314 00:44:19.856980 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.856996 kubelet[2504]: E0314 00:44:19.856991 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.857604 kubelet[2504]: E0314 00:44:19.857565 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.857604 kubelet[2504]: W0314 00:44:19.857590 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.857604 kubelet[2504]: E0314 00:44:19.857599 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.857991 kubelet[2504]: E0314 00:44:19.857962 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.857991 kubelet[2504]: W0314 00:44:19.857985 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.858047 kubelet[2504]: E0314 00:44:19.857994 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.858380 kubelet[2504]: E0314 00:44:19.858354 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.858380 kubelet[2504]: W0314 00:44:19.858378 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.858438 kubelet[2504]: E0314 00:44:19.858387 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.858819 kubelet[2504]: E0314 00:44:19.858774 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.858819 kubelet[2504]: W0314 00:44:19.858802 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.858819 kubelet[2504]: E0314 00:44:19.858813 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.859219 kubelet[2504]: E0314 00:44:19.859174 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.859219 kubelet[2504]: W0314 00:44:19.859201 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.859219 kubelet[2504]: E0314 00:44:19.859210 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.859650 kubelet[2504]: E0314 00:44:19.859624 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.859650 kubelet[2504]: W0314 00:44:19.859646 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.859697 kubelet[2504]: E0314 00:44:19.859655 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.860054 kubelet[2504]: E0314 00:44:19.860011 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.860054 kubelet[2504]: W0314 00:44:19.860037 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.860054 kubelet[2504]: E0314 00:44:19.860046 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.860420 kubelet[2504]: E0314 00:44:19.860377 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.860420 kubelet[2504]: W0314 00:44:19.860403 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.860420 kubelet[2504]: E0314 00:44:19.860412 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.860858 kubelet[2504]: E0314 00:44:19.860817 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.860858 kubelet[2504]: W0314 00:44:19.860843 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.860858 kubelet[2504]: E0314 00:44:19.860852 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.861208 kubelet[2504]: E0314 00:44:19.861165 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.861208 kubelet[2504]: W0314 00:44:19.861191 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.861208 kubelet[2504]: E0314 00:44:19.861200 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.861611 kubelet[2504]: E0314 00:44:19.861571 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.861611 kubelet[2504]: W0314 00:44:19.861596 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.861611 kubelet[2504]: E0314 00:44:19.861604 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.861981 kubelet[2504]: E0314 00:44:19.861938 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.861981 kubelet[2504]: W0314 00:44:19.861964 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.861981 kubelet[2504]: E0314 00:44:19.861972 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.862437 kubelet[2504]: E0314 00:44:19.862394 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.862437 kubelet[2504]: W0314 00:44:19.862420 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.862437 kubelet[2504]: E0314 00:44:19.862428 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.862942 kubelet[2504]: E0314 00:44:19.862892 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.862942 kubelet[2504]: W0314 00:44:19.862915 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.862942 kubelet[2504]: E0314 00:44:19.862924 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.863256 kubelet[2504]: E0314 00:44:19.863214 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.863256 kubelet[2504]: W0314 00:44:19.863237 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.863256 kubelet[2504]: E0314 00:44:19.863245 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.863672 kubelet[2504]: E0314 00:44:19.863634 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.863672 kubelet[2504]: W0314 00:44:19.863662 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.863831 kubelet[2504]: E0314 00:44:19.863680 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.864107 kubelet[2504]: E0314 00:44:19.864041 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.864107 kubelet[2504]: W0314 00:44:19.864070 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.864107 kubelet[2504]: E0314 00:44:19.864083 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.864701 kubelet[2504]: E0314 00:44:19.864474 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.864701 kubelet[2504]: W0314 00:44:19.864553 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.864701 kubelet[2504]: E0314 00:44:19.864564 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.865142 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.866640 kubelet[2504]: W0314 00:44:19.865155 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.865165 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.865619 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.866640 kubelet[2504]: W0314 00:44:19.865629 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.865638 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.865934 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.866640 kubelet[2504]: W0314 00:44:19.865942 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.865951 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:44:19.866640 kubelet[2504]: E0314 00:44:19.866315 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.867046 kubelet[2504]: W0314 00:44:19.866323 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.867046 kubelet[2504]: E0314 00:44:19.866332 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.877603 kubelet[2504]: E0314 00:44:19.877103 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:44:19.877603 kubelet[2504]: W0314 00:44:19.877149 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:44:19.877603 kubelet[2504]: E0314 00:44:19.877169 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:44:19.932935 kubelet[2504]: E0314 00:44:19.932876 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:19.933989 containerd[1445]: time="2026-03-14T00:44:19.933560624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56bdd6bbc5-6jlsw,Uid:560cbdfc-d8bd-4490-a942-9892cb60efde,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:19.953863 containerd[1445]: time="2026-03-14T00:44:19.953798079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8v7c6,Uid:34a9d2cf-7ab1-49fd-aaac-15859400ae99,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:19.975310 containerd[1445]: time="2026-03-14T00:44:19.974358513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:19.975310 containerd[1445]: time="2026-03-14T00:44:19.974445631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:19.975310 containerd[1445]: time="2026-03-14T00:44:19.974466691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:19.975310 containerd[1445]: time="2026-03-14T00:44:19.974694347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:19.993900 containerd[1445]: time="2026-03-14T00:44:19.993476105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:19.993900 containerd[1445]: time="2026-03-14T00:44:19.993588290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:19.993900 containerd[1445]: time="2026-03-14T00:44:19.993604962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:19.993900 containerd[1445]: time="2026-03-14T00:44:19.993715013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:20.003996 systemd[1]: Started cri-containerd-a7905e4dd2598600ea4fb6f6f1d930097cd6f0c2509f0fe3160506689a36c76d.scope - libcontainer container a7905e4dd2598600ea4fb6f6f1d930097cd6f0c2509f0fe3160506689a36c76d. Mar 14 00:44:20.023126 systemd[1]: Started cri-containerd-e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f.scope - libcontainer container e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f. Mar 14 00:44:20.069621 containerd[1445]: time="2026-03-14T00:44:20.068815233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8v7c6,Uid:34a9d2cf-7ab1-49fd-aaac-15859400ae99,Namespace:calico-system,Attempt:0,} returns sandbox id \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\"" Mar 14 00:44:20.070872 containerd[1445]: time="2026-03-14T00:44:20.070716331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:44:20.078368 containerd[1445]: time="2026-03-14T00:44:20.078342688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56bdd6bbc5-6jlsw,Uid:560cbdfc-d8bd-4490-a942-9892cb60efde,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7905e4dd2598600ea4fb6f6f1d930097cd6f0c2509f0fe3160506689a36c76d\"" Mar 14 00:44:20.079366 kubelet[2504]: E0314 00:44:20.079310 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:20.618571 containerd[1445]: time="2026-03-14T00:44:20.618469963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:20.619408 containerd[1445]: time="2026-03-14T00:44:20.619332505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 14 00:44:20.621145 containerd[1445]: time="2026-03-14T00:44:20.621060943Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:20.623992 containerd[1445]: time="2026-03-14T00:44:20.623695985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:20.624893 containerd[1445]: time="2026-03-14T00:44:20.624838568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 554.034769ms" Mar 14 00:44:20.624893 containerd[1445]: time="2026-03-14T00:44:20.624887452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 14 00:44:20.626199 containerd[1445]: 
time="2026-03-14T00:44:20.626157425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:44:20.630563 containerd[1445]: time="2026-03-14T00:44:20.630392748Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:44:20.651216 containerd[1445]: time="2026-03-14T00:44:20.651117050Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521\"" Mar 14 00:44:20.652029 containerd[1445]: time="2026-03-14T00:44:20.651977456Z" level=info msg="StartContainer for \"147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521\"" Mar 14 00:44:20.699584 systemd[1]: Started cri-containerd-147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521.scope - libcontainer container 147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521. Mar 14 00:44:20.741084 containerd[1445]: time="2026-03-14T00:44:20.740971484Z" level=info msg="StartContainer for \"147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521\" returns successfully" Mar 14 00:44:20.759022 systemd[1]: cri-containerd-147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521.scope: Deactivated successfully. Mar 14 00:44:20.791700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521-rootfs.mount: Deactivated successfully. Mar 14 00:44:20.826705 containerd[1445]: time="2026-03-14T00:44:20.826611838Z" level=info msg="shim disconnected" id=147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521 namespace=k8s.io Mar 14 00:44:20.826705 containerd[1445]: time="2026-03-14T00:44:20.826686821Z" level=warning msg="cleaning up after shim disconnected" id=147bb8790e596a6dce457c12394ba8a5e5cc4713abeaa7244f7c192785b47521 namespace=k8s.io Mar 14 00:44:20.826705 containerd[1445]: time="2026-03-14T00:44:20.826696731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:44:21.732166 kubelet[2504]: E0314 00:44:21.732054 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:22.572297 containerd[1445]: time="2026-03-14T00:44:22.572217043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:22.574764 containerd[1445]: time="2026-03-14T00:44:22.574653705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 14 00:44:22.576862 containerd[1445]: time="2026-03-14T00:44:22.576801556Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:22.580650 containerd[1445]: time="2026-03-14T00:44:22.580568439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 
00:44:22.581266 containerd[1445]: time="2026-03-14T00:44:22.581218369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.955014576s" Mar 14 00:44:22.581266 containerd[1445]: time="2026-03-14T00:44:22.581262703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:44:22.584311 containerd[1445]: time="2026-03-14T00:44:22.584039913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:44:22.600619 containerd[1445]: time="2026-03-14T00:44:22.600581790Z" level=info msg="CreateContainer within sandbox \"a7905e4dd2598600ea4fb6f6f1d930097cd6f0c2509f0fe3160506689a36c76d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:44:22.625171 containerd[1445]: time="2026-03-14T00:44:22.625078393Z" level=info msg="CreateContainer within sandbox \"a7905e4dd2598600ea4fb6f6f1d930097cd6f0c2509f0fe3160506689a36c76d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8885091451c9363a7dc29118aea6d903db1db117a507ab8fa5fe0ad43119f52d\"" Mar 14 00:44:22.625811 containerd[1445]: time="2026-03-14T00:44:22.625755576Z" level=info msg="StartContainer for \"8885091451c9363a7dc29118aea6d903db1db117a507ab8fa5fe0ad43119f52d\"" Mar 14 00:44:22.660687 systemd[1]: Started cri-containerd-8885091451c9363a7dc29118aea6d903db1db117a507ab8fa5fe0ad43119f52d.scope - libcontainer container 8885091451c9363a7dc29118aea6d903db1db117a507ab8fa5fe0ad43119f52d. Mar 14 00:44:22.709935 containerd[1445]: time="2026-03-14T00:44:22.709830184Z" level=info msg="StartContainer for \"8885091451c9363a7dc29118aea6d903db1db117a507ab8fa5fe0ad43119f52d\" returns successfully" Mar 14 00:44:22.816665 kubelet[2504]: E0314 00:44:22.816313 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:23.733536 kubelet[2504]: E0314 00:44:23.732056 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:23.823433 kubelet[2504]: I0314 00:44:23.823396 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:44:23.824555 kubelet[2504]: E0314 00:44:23.824398 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:25.733559 kubelet[2504]: E0314 00:44:25.731833 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:26.672778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360380817.mount: Deactivated successfully. 
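Note on the burst of driver-call.go errors at 00:44:19 and the flexvol-driver container that follows at 00:44:20: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, and until Calico's flexvol-driver init container has installed the uds binary under nodeagent~uds/, each "init" probe finds no executable and the empty output fails JSON decoding, hence "unexpected end of JSON input". A FlexVolume driver is just an executable that answers "init" (and the other verbs) with a small JSON status object on stdout. The sketch below shows roughly what a minimal driver's init handler returns, assuming the conventional {"status": "Success"} shape; it is illustrative and is not Calico's actual uds binary.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is the JSON object a FlexVolume driver prints on stdout.
// An empty stdout is exactly the "unexpected end of JSON input" failure
// that the kubelet logs above.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and advertise that attach/detach calls are not
		// needed by this driver.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{
			Status:  "Not supported",
			Message: "only init is implemented in this sketch",
		})
		fmt.Println(string(out))
	}
}

Once the flexvol-driver init container has installed the real binary, subsequent plugin probes should parse successfully and the repeated errors stop, which is consistent with them not reappearing later in this log.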
Mar 14 00:44:26.943033 containerd[1445]: time="2026-03-14T00:44:26.942854786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 14 00:44:26.948955 containerd[1445]: time="2026-03-14T00:44:26.948851344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.364768079s" Mar 14 00:44:26.948955 containerd[1445]: time="2026-03-14T00:44:26.948899245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 14 00:44:26.952185 containerd[1445]: time="2026-03-14T00:44:26.952130332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:26.953745 containerd[1445]: time="2026-03-14T00:44:26.953703495Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:26.954369 containerd[1445]: time="2026-03-14T00:44:26.954233867Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:44:26.954776 containerd[1445]: time="2026-03-14T00:44:26.954736691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:26.977407 containerd[1445]: time="2026-03-14T00:44:26.977313339Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe\"" Mar 14 00:44:26.978150 containerd[1445]: time="2026-03-14T00:44:26.978107410Z" level=info msg="StartContainer for \"d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe\"" Mar 14 00:44:27.041955 systemd[1]: Started cri-containerd-d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe.scope - libcontainer container d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe. Mar 14 00:44:27.074903 containerd[1445]: time="2026-03-14T00:44:27.074760461Z" level=info msg="StartContainer for \"d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe\" returns successfully" Mar 14 00:44:27.119703 systemd[1]: cri-containerd-d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe.scope: Deactivated successfully. 
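Note on the image pulls in this section: each "stop pulling image" entry reports the compressed bytes read and each "Pulled image ... in <duration>" entry reports the wall-clock pull time, so pull throughput can be read straight off the log. The sketch below redoes that division for the images pulled so far; the byte counts and durations are copied from the log, while the MB/s figures are derived here and are not themselves logged.

package main

import "fmt"

// Pull throughput derived from the "stop pulling image" / "Pulled image"
// entries above. Inputs are copied verbatim from the log; only the ratio
// is computed here.
func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // "... in <duration>" from the Pulled line
	}{
		{"quay.io/tigera/operator:v1.40.7", 40846156, 3.515482767},
		{"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4", 6186433, 0.554034769},
		{"ghcr.io/flatcar/calico/typha:v3.31.4", 34551413, 1.955014576},
		{"ghcr.io/flatcar/calico/node:v3.31.4", 159838564, 4.364768079},
	}
	for _, p := range pulls {
		mbps := p.bytes / 1e6 / p.seconds
		fmt.Printf("%-52s %6.1f MB in %7.3fs  ~ %5.1f MB/s\n",
			p.image, p.bytes/1e6, p.seconds, mbps)
	}
}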
Mar 14 00:44:27.311709 containerd[1445]: time="2026-03-14T00:44:27.311409938Z" level=info msg="shim disconnected" id=d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe namespace=k8s.io Mar 14 00:44:27.311709 containerd[1445]: time="2026-03-14T00:44:27.311574219Z" level=warning msg="cleaning up after shim disconnected" id=d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe namespace=k8s.io Mar 14 00:44:27.311709 containerd[1445]: time="2026-03-14T00:44:27.311590922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:44:27.673375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1f3fb472ea7af83fe9047f89e12ec6020cf34bda23862588dcabf7b4b78bafe-rootfs.mount: Deactivated successfully. Mar 14 00:44:27.731353 kubelet[2504]: E0314 00:44:27.731249 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:27.833929 containerd[1445]: time="2026-03-14T00:44:27.833745401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:44:27.850122 kubelet[2504]: I0314 00:44:27.849983 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56bdd6bbc5-6jlsw" podStartSLOduration=6.347987468 podStartE2EDuration="8.849961164s" podCreationTimestamp="2026-03-14 00:44:19 +0000 UTC" firstStartedPulling="2026-03-14 00:44:20.080817112 +0000 UTC m=+19.482540511" lastFinishedPulling="2026-03-14 00:44:22.582790808 +0000 UTC m=+21.984514207" observedRunningTime="2026-03-14 00:44:22.839578031 +0000 UTC m=+22.241301430" watchObservedRunningTime="2026-03-14 00:44:27.849961164 +0000 UTC m=+27.251684573" Mar 14 00:44:29.731226 kubelet[2504]: E0314 00:44:29.731086 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:30.564931 containerd[1445]: time="2026-03-14T00:44:30.564785015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:30.566028 containerd[1445]: time="2026-03-14T00:44:30.565992132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:44:30.567755 containerd[1445]: time="2026-03-14T00:44:30.567635020Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:30.571567 containerd[1445]: time="2026-03-14T00:44:30.571291380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:30.572544 containerd[1445]: time="2026-03-14T00:44:30.572389188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.738601636s" Mar 14 00:44:30.572544 containerd[1445]: time="2026-03-14T00:44:30.572456225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:44:30.580083 containerd[1445]: time="2026-03-14T00:44:30.579916457Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:44:30.623842 containerd[1445]: time="2026-03-14T00:44:30.623752254Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944\"" Mar 14 00:44:30.624803 containerd[1445]: time="2026-03-14T00:44:30.624657499Z" level=info msg="StartContainer for \"494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944\"" Mar 14 00:44:30.681850 systemd[1]: Started cri-containerd-494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944.scope - libcontainer container 494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944. Mar 14 00:44:30.833310 containerd[1445]: time="2026-03-14T00:44:30.832827820Z" level=info msg="StartContainer for \"494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944\" returns successfully" Mar 14 00:44:31.467587 systemd[1]: cri-containerd-494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944.scope: Deactivated successfully. Mar 14 00:44:31.509023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944-rootfs.mount: Deactivated successfully. Mar 14 00:44:31.513916 containerd[1445]: time="2026-03-14T00:44:31.513833856Z" level=info msg="shim disconnected" id=494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944 namespace=k8s.io Mar 14 00:44:31.514032 containerd[1445]: time="2026-03-14T00:44:31.513917985Z" level=warning msg="cleaning up after shim disconnected" id=494e558b41ddb1846809f646cdb6cb7ec463f3a15897593281e9754165eb9944 namespace=k8s.io Mar 14 00:44:31.514032 containerd[1445]: time="2026-03-14T00:44:31.513935449Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:44:31.522691 kubelet[2504]: I0314 00:44:31.522320 2504 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 14 00:44:31.534569 containerd[1445]: time="2026-03-14T00:44:31.534440759Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:44:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:44:31.587450 systemd[1]: Created slice kubepods-besteffort-pod3831536c_b543_4e9f_9a7f_69e237b512a5.slice - libcontainer container kubepods-besteffort-pod3831536c_b543_4e9f_9a7f_69e237b512a5.slice. Mar 14 00:44:31.597071 systemd[1]: Created slice kubepods-besteffort-pod749bfef9_76e0_4e7a_aa4c_68e01c2e1c20.slice - libcontainer container kubepods-besteffort-pod749bfef9_76e0_4e7a_aa4c_68e01c2e1c20.slice. 
Mar 14 00:44:31.606992 systemd[1]: Created slice kubepods-besteffort-pod2b88dcf5_44e7_404f_8042_caf40cd3a058.slice - libcontainer container kubepods-besteffort-pod2b88dcf5_44e7_404f_8042_caf40cd3a058.slice. Mar 14 00:44:31.613839 systemd[1]: Created slice kubepods-burstable-pod7c9fbbc2_3c55_454a_b11a_04b92d39d42f.slice - libcontainer container kubepods-burstable-pod7c9fbbc2_3c55_454a_b11a_04b92d39d42f.slice. Mar 14 00:44:31.619553 systemd[1]: Created slice kubepods-besteffort-poda0072c78_1437_4491_bedf_c69885c50e4d.slice - libcontainer container kubepods-besteffort-poda0072c78_1437_4491_bedf_c69885c50e4d.slice. Mar 14 00:44:31.625882 systemd[1]: Created slice kubepods-besteffort-pod82c3e5a8_80d3_44f5_9eff_3b2203436fbc.slice - libcontainer container kubepods-besteffort-pod82c3e5a8_80d3_44f5_9eff_3b2203436fbc.slice. Mar 14 00:44:31.634921 systemd[1]: Created slice kubepods-burstable-pod10165863_412c_4604_94c2_a3af60a284e9.slice - libcontainer container kubepods-burstable-pod10165863_412c_4604_94c2_a3af60a284e9.slice. Mar 14 00:44:31.652355 kubelet[2504]: I0314 00:44:31.652325 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-ca-bundle\") pod \"whisker-6ffc58b654-qvwbh\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " pod="calico-system/whisker-6ffc58b654-qvwbh" Mar 14 00:44:31.653088 kubelet[2504]: I0314 00:44:31.652565 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749bfef9-76e0-4e7a-aa4c-68e01c2e1c20-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-2slxj\" (UID: \"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20\") " pod="calico-system/goldmane-cccfbd5cf-2slxj" Mar 14 00:44:31.653088 kubelet[2504]: I0314 00:44:31.652613 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr42k\" (UniqueName: \"kubernetes.io/projected/3831536c-b543-4e9f-9a7f-69e237b512a5-kube-api-access-tr42k\") pod \"calico-kube-controllers-85b988fbff-2sprk\" (UID: \"3831536c-b543-4e9f-9a7f-69e237b512a5\") " pod="calico-system/calico-kube-controllers-85b988fbff-2sprk" Mar 14 00:44:31.653088 kubelet[2504]: I0314 00:44:31.652642 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndqh\" (UniqueName: \"kubernetes.io/projected/2b88dcf5-44e7-404f-8042-caf40cd3a058-kube-api-access-kndqh\") pod \"calico-apiserver-7746bfdf9f-tcpxw\" (UID: \"2b88dcf5-44e7-404f-8042-caf40cd3a058\") " pod="calico-system/calico-apiserver-7746bfdf9f-tcpxw" Mar 14 00:44:31.653088 kubelet[2504]: I0314 00:44:31.652670 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a0072c78-1437-4491-bedf-c69885c50e4d-calico-apiserver-certs\") pod \"calico-apiserver-7746bfdf9f-ds2qp\" (UID: \"a0072c78-1437-4491-bedf-c69885c50e4d\") " pod="calico-system/calico-apiserver-7746bfdf9f-ds2qp" Mar 14 00:44:31.653088 kubelet[2504]: I0314 00:44:31.652699 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-nginx-config\") pod \"whisker-6ffc58b654-qvwbh\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " 
pod="calico-system/whisker-6ffc58b654-qvwbh" Mar 14 00:44:31.653557 kubelet[2504]: I0314 00:44:31.652726 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flx4\" (UniqueName: \"kubernetes.io/projected/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-kube-api-access-7flx4\") pod \"whisker-6ffc58b654-qvwbh\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " pod="calico-system/whisker-6ffc58b654-qvwbh" Mar 14 00:44:31.653557 kubelet[2504]: I0314 00:44:31.652751 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/749bfef9-76e0-4e7a-aa4c-68e01c2e1c20-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-2slxj\" (UID: \"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20\") " pod="calico-system/goldmane-cccfbd5cf-2slxj" Mar 14 00:44:31.653557 kubelet[2504]: I0314 00:44:31.652780 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3831536c-b543-4e9f-9a7f-69e237b512a5-tigera-ca-bundle\") pod \"calico-kube-controllers-85b988fbff-2sprk\" (UID: \"3831536c-b543-4e9f-9a7f-69e237b512a5\") " pod="calico-system/calico-kube-controllers-85b988fbff-2sprk" Mar 14 00:44:31.653557 kubelet[2504]: I0314 00:44:31.652808 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl9sm\" (UniqueName: \"kubernetes.io/projected/7c9fbbc2-3c55-454a-b11a-04b92d39d42f-kube-api-access-tl9sm\") pod \"coredns-66bc5c9577-wsjrn\" (UID: \"7c9fbbc2-3c55-454a-b11a-04b92d39d42f\") " pod="kube-system/coredns-66bc5c9577-wsjrn" Mar 14 00:44:31.653557 kubelet[2504]: I0314 00:44:31.652837 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgh5\" (UniqueName: \"kubernetes.io/projected/a0072c78-1437-4491-bedf-c69885c50e4d-kube-api-access-7bgh5\") pod \"calico-apiserver-7746bfdf9f-ds2qp\" (UID: \"a0072c78-1437-4491-bedf-c69885c50e4d\") " pod="calico-system/calico-apiserver-7746bfdf9f-ds2qp" Mar 14 00:44:31.653660 kubelet[2504]: I0314 00:44:31.652856 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10165863-412c-4604-94c2-a3af60a284e9-config-volume\") pod \"coredns-66bc5c9577-s677d\" (UID: \"10165863-412c-4604-94c2-a3af60a284e9\") " pod="kube-system/coredns-66bc5c9577-s677d" Mar 14 00:44:31.653660 kubelet[2504]: I0314 00:44:31.652877 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c9fbbc2-3c55-454a-b11a-04b92d39d42f-config-volume\") pod \"coredns-66bc5c9577-wsjrn\" (UID: \"7c9fbbc2-3c55-454a-b11a-04b92d39d42f\") " pod="kube-system/coredns-66bc5c9577-wsjrn" Mar 14 00:44:31.653660 kubelet[2504]: I0314 00:44:31.652897 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/749bfef9-76e0-4e7a-aa4c-68e01c2e1c20-config\") pod \"goldmane-cccfbd5cf-2slxj\" (UID: \"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20\") " pod="calico-system/goldmane-cccfbd5cf-2slxj" Mar 14 00:44:31.653660 kubelet[2504]: I0314 00:44:31.652910 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/2b88dcf5-44e7-404f-8042-caf40cd3a058-calico-apiserver-certs\") pod \"calico-apiserver-7746bfdf9f-tcpxw\" (UID: \"2b88dcf5-44e7-404f-8042-caf40cd3a058\") " pod="calico-system/calico-apiserver-7746bfdf9f-tcpxw" Mar 14 00:44:31.653660 kubelet[2504]: I0314 00:44:31.652932 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-backend-key-pair\") pod \"whisker-6ffc58b654-qvwbh\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " pod="calico-system/whisker-6ffc58b654-qvwbh" Mar 14 00:44:31.653768 kubelet[2504]: I0314 00:44:31.652944 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frlgf\" (UniqueName: \"kubernetes.io/projected/749bfef9-76e0-4e7a-aa4c-68e01c2e1c20-kube-api-access-frlgf\") pod \"goldmane-cccfbd5cf-2slxj\" (UID: \"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20\") " pod="calico-system/goldmane-cccfbd5cf-2slxj" Mar 14 00:44:31.653768 kubelet[2504]: I0314 00:44:31.652957 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrq4\" (UniqueName: \"kubernetes.io/projected/10165863-412c-4604-94c2-a3af60a284e9-kube-api-access-gsrq4\") pod \"coredns-66bc5c9577-s677d\" (UID: \"10165863-412c-4604-94c2-a3af60a284e9\") " pod="kube-system/coredns-66bc5c9577-s677d" Mar 14 00:44:31.738203 systemd[1]: Created slice kubepods-besteffort-pod33ff238e_5cbd_4d42_b80c_67e32b8fb49d.slice - libcontainer container kubepods-besteffort-pod33ff238e_5cbd_4d42_b80c_67e32b8fb49d.slice. Mar 14 00:44:31.744850 containerd[1445]: time="2026-03-14T00:44:31.744783113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96thm,Uid:33ff238e-5cbd-4d42-b80c-67e32b8fb49d,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:31.874586 containerd[1445]: time="2026-03-14T00:44:31.874412566Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:44:31.899578 containerd[1445]: time="2026-03-14T00:44:31.899438120Z" level=info msg="CreateContainer within sandbox \"e46f85e4511ecce4602e5a1ede7be85efb50657bfcfc1f4110f71cfe5f5ab47f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d17eeee1bd68bad3877a5fb629fe8660a63675e490cfe029b24c7ff55a2beddb\"" Mar 14 00:44:31.900229 containerd[1445]: time="2026-03-14T00:44:31.900193982Z" level=info msg="StartContainer for \"d17eeee1bd68bad3877a5fb629fe8660a63675e490cfe029b24c7ff55a2beddb\"" Mar 14 00:44:31.903227 containerd[1445]: time="2026-03-14T00:44:31.902733101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b988fbff-2sprk,Uid:3831536c-b543-4e9f-9a7f-69e237b512a5,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:31.908095 containerd[1445]: time="2026-03-14T00:44:31.907776534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2slxj,Uid:749bfef9-76e0-4e7a-aa4c-68e01c2e1c20,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:31.912548 containerd[1445]: time="2026-03-14T00:44:31.912418923Z" level=error msg="Failed to destroy network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 14 00:44:31.913185 containerd[1445]: time="2026-03-14T00:44:31.913079396Z" level=error msg="encountered an error cleaning up failed sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:31.913461 containerd[1445]: time="2026-03-14T00:44:31.913287330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96thm,Uid:33ff238e-5cbd-4d42-b80c-67e32b8fb49d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:31.918852 containerd[1445]: time="2026-03-14T00:44:31.918784654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-tcpxw,Uid:2b88dcf5-44e7-404f-8042-caf40cd3a058,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:31.921844 kubelet[2504]: E0314 00:44:31.921800 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:31.925121 containerd[1445]: time="2026-03-14T00:44:31.925041211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wsjrn,Uid:7c9fbbc2-3c55-454a-b11a-04b92d39d42f,Namespace:kube-system,Attempt:0,}" Mar 14 00:44:31.930056 kubelet[2504]: E0314 00:44:31.929932 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:31.930056 kubelet[2504]: E0314 00:44:31.929980 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:31.930056 kubelet[2504]: E0314 00:44:31.929999 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-96thm" Mar 14 00:44:31.930174 kubelet[2504]: E0314 00:44:31.930038 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-96thm_calico-system(33ff238e-5cbd-4d42-b80c-67e32b8fb49d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-96thm_calico-system(33ff238e-5cbd-4d42-b80c-67e32b8fb49d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-96thm" podUID="33ff238e-5cbd-4d42-b80c-67e32b8fb49d" Mar 14 00:44:31.934480 containerd[1445]: time="2026-03-14T00:44:31.934412389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ffc58b654-qvwbh,Uid:82c3e5a8-80d3-44f5-9eff-3b2203436fbc,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:31.934845 containerd[1445]: time="2026-03-14T00:44:31.934662237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-ds2qp,Uid:a0072c78-1437-4491-bedf-c69885c50e4d,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:31.943356 kubelet[2504]: E0314 00:44:31.943231 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:31.944976 containerd[1445]: time="2026-03-14T00:44:31.944847602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s677d,Uid:10165863-412c-4604-94c2-a3af60a284e9,Namespace:kube-system,Attempt:0,}" Mar 14 00:44:31.948757 systemd[1]: Started cri-containerd-d17eeee1bd68bad3877a5fb629fe8660a63675e490cfe029b24c7ff55a2beddb.scope - libcontainer container d17eeee1bd68bad3877a5fb629fe8660a63675e490cfe029b24c7ff55a2beddb. Mar 14 00:44:32.030569 containerd[1445]: time="2026-03-14T00:44:32.030391025Z" level=info msg="StartContainer for \"d17eeee1bd68bad3877a5fb629fe8660a63675e490cfe029b24c7ff55a2beddb\" returns successfully" Mar 14 00:44:32.085754 containerd[1445]: time="2026-03-14T00:44:32.085705347Z" level=error msg="Failed to destroy network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.087207 containerd[1445]: time="2026-03-14T00:44:32.087174661Z" level=error msg="encountered an error cleaning up failed sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.087632 containerd[1445]: time="2026-03-14T00:44:32.087605146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b988fbff-2sprk,Uid:3831536c-b543-4e9f-9a7f-69e237b512a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.088433 kubelet[2504]: E0314 00:44:32.087982 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.088433 kubelet[2504]: E0314 00:44:32.088032 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b988fbff-2sprk" Mar 14 00:44:32.088433 kubelet[2504]: E0314 00:44:32.088051 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b988fbff-2sprk" Mar 14 00:44:32.088630 kubelet[2504]: E0314 00:44:32.088110 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85b988fbff-2sprk_calico-system(3831536c-b543-4e9f-9a7f-69e237b512a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85b988fbff-2sprk_calico-system(3831536c-b543-4e9f-9a7f-69e237b512a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85b988fbff-2sprk" podUID="3831536c-b543-4e9f-9a7f-69e237b512a5" Mar 14 00:44:32.152331 containerd[1445]: time="2026-03-14T00:44:32.152286619Z" level=error msg="Failed to destroy network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.157119 containerd[1445]: time="2026-03-14T00:44:32.157083789Z" level=error msg="encountered an error cleaning up failed sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.157569 containerd[1445]: time="2026-03-14T00:44:32.157540264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-ds2qp,Uid:a0072c78-1437-4491-bedf-c69885c50e4d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.158406 kubelet[2504]: E0314 00:44:32.158028 2504 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.158406 kubelet[2504]: E0314 00:44:32.158077 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7746bfdf9f-ds2qp" Mar 14 00:44:32.158406 kubelet[2504]: E0314 00:44:32.158096 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7746bfdf9f-ds2qp" Mar 14 00:44:32.158584 kubelet[2504]: E0314 00:44:32.158138 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7746bfdf9f-ds2qp_calico-system(a0072c78-1437-4491-bedf-c69885c50e4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7746bfdf9f-ds2qp_calico-system(a0072c78-1437-4491-bedf-c69885c50e4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7746bfdf9f-ds2qp" podUID="a0072c78-1437-4491-bedf-c69885c50e4d" Mar 14 00:44:32.160705 containerd[1445]: time="2026-03-14T00:44:32.160673647Z" level=error msg="Failed to destroy network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.162315 containerd[1445]: time="2026-03-14T00:44:32.162191352Z" level=error msg="encountered an error cleaning up failed sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.163033 containerd[1445]: time="2026-03-14T00:44:32.162703513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2slxj,Uid:749bfef9-76e0-4e7a-aa4c-68e01c2e1c20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 
00:44:32.163932 kubelet[2504]: E0314 00:44:32.163534 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.163932 kubelet[2504]: E0314 00:44:32.163571 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2slxj" Mar 14 00:44:32.163932 kubelet[2504]: E0314 00:44:32.163588 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2slxj" Mar 14 00:44:32.164029 kubelet[2504]: E0314 00:44:32.163661 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-2slxj_calico-system(749bfef9-76e0-4e7a-aa4c-68e01c2e1c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-2slxj_calico-system(749bfef9-76e0-4e7a-aa4c-68e01c2e1c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-2slxj" podUID="749bfef9-76e0-4e7a-aa4c-68e01c2e1c20" Mar 14 00:44:32.173140 containerd[1445]: time="2026-03-14T00:44:32.173062992Z" level=error msg="Failed to destroy network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.174551 containerd[1445]: time="2026-03-14T00:44:32.174471892Z" level=error msg="encountered an error cleaning up failed sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.174796 containerd[1445]: time="2026-03-14T00:44:32.174698051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s677d,Uid:10165863-412c-4604-94c2-a3af60a284e9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.175287 kubelet[2504]: E0314 00:44:32.175227 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.175797 kubelet[2504]: E0314 00:44:32.175409 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s677d" Mar 14 00:44:32.175797 kubelet[2504]: E0314 00:44:32.175477 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s677d" Mar 14 00:44:32.175797 kubelet[2504]: E0314 00:44:32.175566 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-s677d_kube-system(10165863-412c-4604-94c2-a3af60a284e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-s677d_kube-system(10165863-412c-4604-94c2-a3af60a284e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-s677d" podUID="10165863-412c-4604-94c2-a3af60a284e9" Mar 14 00:44:32.179765 containerd[1445]: time="2026-03-14T00:44:32.179602619Z" level=error msg="Failed to destroy network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.180156 containerd[1445]: time="2026-03-14T00:44:32.180022636Z" level=error msg="encountered an error cleaning up failed sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.180156 containerd[1445]: time="2026-03-14T00:44:32.180060327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wsjrn,Uid:7c9fbbc2-3c55-454a-b11a-04b92d39d42f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.180595 kubelet[2504]: E0314 00:44:32.180541 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.180648 kubelet[2504]: E0314 00:44:32.180611 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wsjrn" Mar 14 00:44:32.182085 kubelet[2504]: E0314 00:44:32.180634 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wsjrn" Mar 14 00:44:32.183797 kubelet[2504]: E0314 00:44:32.182143 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wsjrn_kube-system(7c9fbbc2-3c55-454a-b11a-04b92d39d42f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wsjrn_kube-system(7c9fbbc2-3c55-454a-b11a-04b92d39d42f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wsjrn" podUID="7c9fbbc2-3c55-454a-b11a-04b92d39d42f" Mar 14 00:44:32.189643 containerd[1445]: time="2026-03-14T00:44:32.188667723Z" level=error msg="Failed to destroy network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.189643 containerd[1445]: time="2026-03-14T00:44:32.189155998Z" level=error msg="encountered an error cleaning up failed sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.189643 containerd[1445]: time="2026-03-14T00:44:32.189197237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-tcpxw,Uid:2b88dcf5-44e7-404f-8042-caf40cd3a058,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.189974 kubelet[2504]: E0314 00:44:32.189422 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.189974 kubelet[2504]: E0314 00:44:32.189467 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7746bfdf9f-tcpxw" Mar 14 00:44:32.189974 kubelet[2504]: E0314 00:44:32.189545 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7746bfdf9f-tcpxw" Mar 14 00:44:32.190102 kubelet[2504]: E0314 00:44:32.189588 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7746bfdf9f-tcpxw_calico-system(2b88dcf5-44e7-404f-8042-caf40cd3a058)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7746bfdf9f-tcpxw_calico-system(2b88dcf5-44e7-404f-8042-caf40cd3a058)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7746bfdf9f-tcpxw" podUID="2b88dcf5-44e7-404f-8042-caf40cd3a058" Mar 14 00:44:32.192794 containerd[1445]: time="2026-03-14T00:44:32.192691044Z" level=error msg="Failed to destroy network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.193312 containerd[1445]: time="2026-03-14T00:44:32.193171776Z" level=error msg="encountered an error cleaning up failed sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.193312 containerd[1445]: time="2026-03-14T00:44:32.193294288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ffc58b654-qvwbh,Uid:82c3e5a8-80d3-44f5-9eff-3b2203436fbc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.193660 kubelet[2504]: E0314 00:44:32.193609 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:44:32.193660 kubelet[2504]: E0314 00:44:32.193659 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ffc58b654-qvwbh" Mar 14 00:44:32.193660 kubelet[2504]: E0314 00:44:32.193673 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ffc58b654-qvwbh" Mar 14 00:44:32.193885 kubelet[2504]: E0314 00:44:32.193706 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ffc58b654-qvwbh_calico-system(82c3e5a8-80d3-44f5-9eff-3b2203436fbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ffc58b654-qvwbh_calico-system(82c3e5a8-80d3-44f5-9eff-3b2203436fbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ffc58b654-qvwbh" podUID="82c3e5a8-80d3-44f5-9eff-3b2203436fbc" Mar 14 00:44:32.765253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e-shm.mount: Deactivated successfully. 
Mar 14 00:44:32.855310 kubelet[2504]: I0314 00:44:32.855166 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:44:32.857678 kubelet[2504]: I0314 00:44:32.856944 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:44:32.859748 kubelet[2504]: I0314 00:44:32.859699 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:44:32.862385 kubelet[2504]: I0314 00:44:32.862330 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:44:32.869449 containerd[1445]: time="2026-03-14T00:44:32.869371691Z" level=info msg="StopPodSandbox for \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\"" Mar 14 00:44:32.870251 containerd[1445]: time="2026-03-14T00:44:32.869370669Z" level=info msg="StopPodSandbox for \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\"" Mar 14 00:44:32.875551 containerd[1445]: time="2026-03-14T00:44:32.871880756Z" level=info msg="Ensure that sandbox 6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435 in task-service has been cleanup successfully" Mar 14 00:44:32.875759 kubelet[2504]: I0314 00:44:32.875147 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:44:32.879404 containerd[1445]: time="2026-03-14T00:44:32.878799295Z" level=info msg="StopPodSandbox for \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\"" Mar 14 00:44:32.879404 containerd[1445]: time="2026-03-14T00:44:32.879195085Z" level=info msg="Ensure that sandbox c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d in task-service has been cleanup successfully" Mar 14 00:44:32.880945 containerd[1445]: time="2026-03-14T00:44:32.880886608Z" level=info msg="Ensure that sandbox 25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651 in task-service has been cleanup successfully" Mar 14 00:44:32.881072 containerd[1445]: time="2026-03-14T00:44:32.881041975Z" level=info msg="StopPodSandbox for \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\"" Mar 14 00:44:32.882817 containerd[1445]: time="2026-03-14T00:44:32.881229961Z" level=info msg="Ensure that sandbox f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c in task-service has been cleanup successfully" Mar 14 00:44:32.883083 containerd[1445]: time="2026-03-14T00:44:32.883056843Z" level=info msg="StopPodSandbox for \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\"" Mar 14 00:44:32.883457 containerd[1445]: time="2026-03-14T00:44:32.883435460Z" level=info msg="Ensure that sandbox 86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e in task-service has been cleanup successfully" Mar 14 00:44:32.883769 kubelet[2504]: I0314 00:44:32.883642 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:44:32.885767 containerd[1445]: time="2026-03-14T00:44:32.885717712Z" level=info msg="StopPodSandbox for \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\"" Mar 14 00:44:32.885952 
containerd[1445]: time="2026-03-14T00:44:32.885903825Z" level=info msg="Ensure that sandbox f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa in task-service has been cleanup successfully" Mar 14 00:44:32.889824 kubelet[2504]: I0314 00:44:32.889260 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:44:32.891250 containerd[1445]: time="2026-03-14T00:44:32.891147972Z" level=info msg="StopPodSandbox for \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\"" Mar 14 00:44:32.891530 containerd[1445]: time="2026-03-14T00:44:32.891359683Z" level=info msg="Ensure that sandbox 761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e in task-service has been cleanup successfully" Mar 14 00:44:32.913122 kubelet[2504]: I0314 00:44:32.913093 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:44:32.918040 kubelet[2504]: I0314 00:44:32.917962 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8v7c6" podStartSLOduration=3.413803304 podStartE2EDuration="13.917947523s" podCreationTimestamp="2026-03-14 00:44:19 +0000 UTC" firstStartedPulling="2026-03-14 00:44:20.070269916 +0000 UTC m=+19.471993316" lastFinishedPulling="2026-03-14 00:44:30.574414136 +0000 UTC m=+29.976137535" observedRunningTime="2026-03-14 00:44:32.916326614 +0000 UTC m=+32.318050014" watchObservedRunningTime="2026-03-14 00:44:32.917947523 +0000 UTC m=+32.319670922" Mar 14 00:44:32.920342 containerd[1445]: time="2026-03-14T00:44:32.919427340Z" level=info msg="StopPodSandbox for \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\"" Mar 14 00:44:32.920342 containerd[1445]: time="2026-03-14T00:44:32.919684207Z" level=info msg="Ensure that sandbox 97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395 in task-service has been cleanup successfully" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.002 [INFO][3692] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.002 [INFO][3692] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" iface="eth0" netns="/var/run/netns/cni-e64b25be-5eae-047a-2137-314d1547a66c" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.002 [INFO][3692] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" iface="eth0" netns="/var/run/netns/cni-e64b25be-5eae-047a-2137-314d1547a66c" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.004 [INFO][3692] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" iface="eth0" netns="/var/run/netns/cni-e64b25be-5eae-047a-2137-314d1547a66c" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.004 [INFO][3692] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.004 [INFO][3692] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.078 [INFO][3812] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.079 [INFO][3812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.079 [INFO][3812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.095 [WARNING][3812] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.099 [INFO][3812] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.104 [INFO][3812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.141429 containerd[1445]: 2026-03-14 00:44:33.128 [INFO][3692] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:44:33.145906 containerd[1445]: time="2026-03-14T00:44:33.143676491Z" level=info msg="TearDown network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\" successfully" Mar 14 00:44:33.145906 containerd[1445]: time="2026-03-14T00:44:33.143705597Z" level=info msg="StopPodSandbox for \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\" returns successfully" Mar 14 00:44:33.147165 systemd[1]: run-netns-cni\x2de64b25be\x2d5eae\x2d047a\x2d2137\x2d314d1547a66c.mount: Deactivated successfully. Mar 14 00:44:33.152235 containerd[1445]: time="2026-03-14T00:44:33.152087617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-tcpxw,Uid:2b88dcf5-44e7-404f-8042-caf40cd3a058,Namespace:calico-system,Attempt:1,}" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.037 [INFO][3750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.037 [INFO][3750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" iface="eth0" netns="/var/run/netns/cni-be93a19f-c8df-cda0-cb1a-a6d1e7238184" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.037 [INFO][3750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" iface="eth0" netns="/var/run/netns/cni-be93a19f-c8df-cda0-cb1a-a6d1e7238184" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.038 [INFO][3750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" iface="eth0" netns="/var/run/netns/cni-be93a19f-c8df-cda0-cb1a-a6d1e7238184" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.038 [INFO][3750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.038 [INFO][3750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.113 [INFO][3830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.114 [INFO][3830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.114 [INFO][3830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.127 [WARNING][3830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.127 [INFO][3830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.129 [INFO][3830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.152472 containerd[1445]: 2026-03-14 00:44:33.137 [INFO][3750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:44:33.154355 containerd[1445]: time="2026-03-14T00:44:33.153882030Z" level=info msg="TearDown network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\" successfully" Mar 14 00:44:33.154998 containerd[1445]: time="2026-03-14T00:44:33.154621450Z" level=info msg="StopPodSandbox for \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\" returns successfully" Mar 14 00:44:33.159870 systemd[1]: run-netns-cni\x2dbe93a19f\x2dc8df\x2dcda0\x2dcb1a\x2da6d1e7238184.mount: Deactivated successfully. 
Mar 14 00:44:33.162469 containerd[1445]: time="2026-03-14T00:44:33.162373999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96thm,Uid:33ff238e-5cbd-4d42-b80c-67e32b8fb49d,Namespace:calico-system,Attempt:1,}" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.019 [INFO][3716] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.019 [INFO][3716] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" iface="eth0" netns="/var/run/netns/cni-a45604dd-4e58-93f1-96f8-abbf653c65cd" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.019 [INFO][3716] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" iface="eth0" netns="/var/run/netns/cni-a45604dd-4e58-93f1-96f8-abbf653c65cd" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.020 [INFO][3716] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" iface="eth0" netns="/var/run/netns/cni-a45604dd-4e58-93f1-96f8-abbf653c65cd" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.020 [INFO][3716] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.020 [INFO][3716] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.147 [INFO][3821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.147 [INFO][3821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.147 [INFO][3821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.157 [WARNING][3821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.157 [INFO][3821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.160 [INFO][3821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.180353 containerd[1445]: 2026-03-14 00:44:33.167 [INFO][3716] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:44:33.181319 containerd[1445]: time="2026-03-14T00:44:33.180999978Z" level=info msg="TearDown network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\" successfully" Mar 14 00:44:33.181319 containerd[1445]: time="2026-03-14T00:44:33.181027309Z" level=info msg="StopPodSandbox for \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\" returns successfully" Mar 14 00:44:33.187579 kubelet[2504]: E0314 00:44:33.184344 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:33.186605 systemd[1]: run-netns-cni\x2da45604dd\x2d4e58\x2d93f1\x2d96f8\x2dabbf653c65cd.mount: Deactivated successfully. Mar 14 00:44:33.187776 containerd[1445]: time="2026-03-14T00:44:33.185792160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wsjrn,Uid:7c9fbbc2-3c55-454a-b11a-04b92d39d42f,Namespace:kube-system,Attempt:1,}" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.047 [INFO][3731] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.051 [INFO][3731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" iface="eth0" netns="/var/run/netns/cni-6871b368-70da-2c02-605e-57587a3bdb2e" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.052 [INFO][3731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" iface="eth0" netns="/var/run/netns/cni-6871b368-70da-2c02-605e-57587a3bdb2e" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.070 [INFO][3731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" iface="eth0" netns="/var/run/netns/cni-6871b368-70da-2c02-605e-57587a3bdb2e" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.072 [INFO][3731] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.079 [INFO][3731] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.167 [INFO][3846] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.168 [INFO][3846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.168 [INFO][3846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.178 [WARNING][3846] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.178 [INFO][3846] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.184 [INFO][3846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.227622 containerd[1445]: 2026-03-14 00:44:33.199 [INFO][3731] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:44:33.228459 containerd[1445]: time="2026-03-14T00:44:33.228423341Z" level=info msg="TearDown network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\" successfully" Mar 14 00:44:33.228611 containerd[1445]: time="2026-03-14T00:44:33.228594446Z" level=info msg="StopPodSandbox for \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\" returns successfully" Mar 14 00:44:33.233155 containerd[1445]: time="2026-03-14T00:44:33.233134204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-ds2qp,Uid:a0072c78-1437-4491-bedf-c69885c50e4d,Namespace:calico-system,Attempt:1,}" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.075 [INFO][3761] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.075 [INFO][3761] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" iface="eth0" netns="/var/run/netns/cni-ec78dc96-cd4c-28b7-21cd-19c23ddc8589" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.080 [INFO][3761] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" iface="eth0" netns="/var/run/netns/cni-ec78dc96-cd4c-28b7-21cd-19c23ddc8589" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.089 [INFO][3761] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" iface="eth0" netns="/var/run/netns/cni-ec78dc96-cd4c-28b7-21cd-19c23ddc8589" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.089 [INFO][3761] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.089 [INFO][3761] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.206 [INFO][3860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.206 [INFO][3860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.206 [INFO][3860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.216 [WARNING][3860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.217 [INFO][3860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.221 [INFO][3860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.261760 containerd[1445]: 2026-03-14 00:44:33.230 [INFO][3761] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:44:33.262740 containerd[1445]: time="2026-03-14T00:44:33.262655944Z" level=info msg="TearDown network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\" successfully" Mar 14 00:44:33.262846 containerd[1445]: time="2026-03-14T00:44:33.262830194Z" level=info msg="StopPodSandbox for \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\" returns successfully" Mar 14 00:44:33.267564 kubelet[2504]: E0314 00:44:33.265710 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:33.274775 containerd[1445]: time="2026-03-14T00:44:33.274717824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s677d,Uid:10165863-412c-4604-94c2-a3af60a284e9,Namespace:kube-system,Attempt:1,}" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.083 [INFO][3773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.084 [INFO][3773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" iface="eth0" netns="/var/run/netns/cni-7a0b68bc-ac4c-4d95-0b78-a8eefbcdf84e" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.086 [INFO][3773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" iface="eth0" netns="/var/run/netns/cni-7a0b68bc-ac4c-4d95-0b78-a8eefbcdf84e" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.091 [INFO][3773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" iface="eth0" netns="/var/run/netns/cni-7a0b68bc-ac4c-4d95-0b78-a8eefbcdf84e" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.094 [INFO][3773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.094 [INFO][3773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.218 [INFO][3859] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.218 [INFO][3859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.221 [INFO][3859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.236 [WARNING][3859] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.236 [INFO][3859] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.241 [INFO][3859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.275026 containerd[1445]: 2026-03-14 00:44:33.260 [INFO][3773] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:44:33.276536 containerd[1445]: time="2026-03-14T00:44:33.275831347Z" level=info msg="TearDown network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\" successfully" Mar 14 00:44:33.276536 containerd[1445]: time="2026-03-14T00:44:33.275863627Z" level=info msg="StopPodSandbox for \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\" returns successfully" Mar 14 00:44:33.284934 containerd[1445]: time="2026-03-14T00:44:33.284419181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b988fbff-2sprk,Uid:3831536c-b543-4e9f-9a7f-69e237b512a5,Namespace:calico-system,Attempt:1,}" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.095 [INFO][3788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.096 [INFO][3788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" iface="eth0" netns="/var/run/netns/cni-c8a3bdd8-edf2-f6c1-635d-18ecc31caf23" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.097 [INFO][3788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" iface="eth0" netns="/var/run/netns/cni-c8a3bdd8-edf2-f6c1-635d-18ecc31caf23" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.101 [INFO][3788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" iface="eth0" netns="/var/run/netns/cni-c8a3bdd8-edf2-f6c1-635d-18ecc31caf23" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.101 [INFO][3788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.101 [INFO][3788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.238 [INFO][3865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.238 [INFO][3865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.257 [INFO][3865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.277 [WARNING][3865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.278 [INFO][3865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.283 [INFO][3865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.292262 containerd[1445]: 2026-03-14 00:44:33.287 [INFO][3788] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:44:33.292699 containerd[1445]: time="2026-03-14T00:44:33.292384442Z" level=info msg="TearDown network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\" successfully" Mar 14 00:44:33.292699 containerd[1445]: time="2026-03-14T00:44:33.292404058Z" level=info msg="StopPodSandbox for \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\" returns successfully" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.127 [INFO][3725] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.129 [INFO][3725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" iface="eth0" netns="/var/run/netns/cni-9f5ef3e4-bd9b-180f-c207-fb76033a1794" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.132 [INFO][3725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" iface="eth0" netns="/var/run/netns/cni-9f5ef3e4-bd9b-180f-c207-fb76033a1794" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.133 [INFO][3725] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" iface="eth0" netns="/var/run/netns/cni-9f5ef3e4-bd9b-180f-c207-fb76033a1794" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.133 [INFO][3725] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.133 [INFO][3725] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.256 [INFO][3879] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.259 [INFO][3879] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.284 [INFO][3879] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.307 [WARNING][3879] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.307 [INFO][3879] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.309 [INFO][3879] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:44:33.317560 containerd[1445]: 2026-03-14 00:44:33.312 [INFO][3725] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:44:33.318665 containerd[1445]: time="2026-03-14T00:44:33.318438770Z" level=info msg="TearDown network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\" successfully" Mar 14 00:44:33.318665 containerd[1445]: time="2026-03-14T00:44:33.318540102Z" level=info msg="StopPodSandbox for \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\" returns successfully" Mar 14 00:44:33.324949 containerd[1445]: time="2026-03-14T00:44:33.324621028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2slxj,Uid:749bfef9-76e0-4e7a-aa4c-68e01c2e1c20,Namespace:calico-system,Attempt:1,}" Mar 14 00:44:33.366818 kubelet[2504]: I0314 00:44:33.366723 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-nginx-config\") pod \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " Mar 14 00:44:33.366965 kubelet[2504]: I0314 00:44:33.366840 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-ca-bundle\") pod \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " Mar 14 00:44:33.366965 kubelet[2504]: I0314 00:44:33.366881 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7flx4\" (UniqueName: \"kubernetes.io/projected/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-kube-api-access-7flx4\") pod \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " Mar 14 00:44:33.366965 kubelet[2504]: I0314 00:44:33.366934 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-backend-key-pair\") pod \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\" (UID: \"82c3e5a8-80d3-44f5-9eff-3b2203436fbc\") " Mar 14 00:44:33.368316 kubelet[2504]: I0314 00:44:33.368197 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "82c3e5a8-80d3-44f5-9eff-3b2203436fbc" (UID: "82c3e5a8-80d3-44f5-9eff-3b2203436fbc"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:44:33.372319 kubelet[2504]: I0314 00:44:33.370226 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "82c3e5a8-80d3-44f5-9eff-3b2203436fbc" (UID: "82c3e5a8-80d3-44f5-9eff-3b2203436fbc"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:44:33.375459 kubelet[2504]: I0314 00:44:33.375389 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-kube-api-access-7flx4" (OuterVolumeSpecName: "kube-api-access-7flx4") pod "82c3e5a8-80d3-44f5-9eff-3b2203436fbc" (UID: "82c3e5a8-80d3-44f5-9eff-3b2203436fbc"). InnerVolumeSpecName "kube-api-access-7flx4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:44:33.383860 kubelet[2504]: I0314 00:44:33.383763 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "82c3e5a8-80d3-44f5-9eff-3b2203436fbc" (UID: "82c3e5a8-80d3-44f5-9eff-3b2203436fbc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:44:33.468831 kubelet[2504]: I0314 00:44:33.468048 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 14 00:44:33.468831 kubelet[2504]: I0314 00:44:33.468079 2504 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 14 00:44:33.468831 kubelet[2504]: I0314 00:44:33.468095 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 14 00:44:33.468831 kubelet[2504]: I0314 00:44:33.468103 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7flx4\" (UniqueName: \"kubernetes.io/projected/82c3e5a8-80d3-44f5-9eff-3b2203436fbc-kube-api-access-7flx4\") on node \"localhost\" DevicePath \"\"" Mar 14 00:44:33.530887 systemd-networkd[1388]: calif6af40ce07a: Link UP Mar 14 00:44:33.531241 systemd-networkd[1388]: calif6af40ce07a: Gained carrier Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.258 [ERROR][3919] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.290 [INFO][3919] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wsjrn-eth0 coredns-66bc5c9577- kube-system 7c9fbbc2-3c55-454a-b11a-04b92d39d42f 933 0 2026-03-14 00:44:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wsjrn eth0 coredns 
[] [] [kns.kube-system ksa.kube-system.coredns] calif6af40ce07a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.291 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.428 [INFO][3950] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" HandleID="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.439 [INFO][3950] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" HandleID="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wsjrn", "timestamp":"2026-03-14 00:44:33.42880194 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004e11e0)} Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.439 [INFO][3950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.439 [INFO][3950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.439 [INFO][3950] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.443 [INFO][3950] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.456 [INFO][3950] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.465 [INFO][3950] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.470 [INFO][3950] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.478 [INFO][3950] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.478 [INFO][3950] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.482 [INFO][3950] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.491 [INFO][3950] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.499 [INFO][3950] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.499 [INFO][3950] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" host="localhost" Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.499 [INFO][3950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:33.547732 containerd[1445]: 2026-03-14 00:44:33.499 [INFO][3950] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" HandleID="k8s-pod-network.cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.549248 containerd[1445]: 2026-03-14 00:44:33.509 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wsjrn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c9fbbc2-3c55-454a-b11a-04b92d39d42f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wsjrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6af40ce07a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.549248 containerd[1445]: 2026-03-14 00:44:33.509 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.549248 containerd[1445]: 2026-03-14 00:44:33.509 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6af40ce07a ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.549248 containerd[1445]: 2026-03-14 00:44:33.530 
[INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.549248 containerd[1445]: 2026-03-14 00:44:33.531 [INFO][3919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wsjrn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c9fbbc2-3c55-454a-b11a-04b92d39d42f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa", Pod:"coredns-66bc5c9577-wsjrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6af40ce07a", MAC:"7a:c6:c4:8f:03:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.549248 containerd[1445]: 2026-03-14 00:44:33.541 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa" Namespace="kube-system" Pod="coredns-66bc5c9577-wsjrn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:44:33.609112 containerd[1445]: time="2026-03-14T00:44:33.608728799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:33.609112 containerd[1445]: time="2026-03-14T00:44:33.608790405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:33.609112 containerd[1445]: time="2026-03-14T00:44:33.608803661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:33.609112 containerd[1445]: time="2026-03-14T00:44:33.608896145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:33.609897 systemd-networkd[1388]: cali4894f1e9d5c: Link UP Mar 14 00:44:33.610917 systemd-networkd[1388]: cali4894f1e9d5c: Gained carrier Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.347 [ERROR][3962] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.364 [INFO][3962] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--s677d-eth0 coredns-66bc5c9577- kube-system 10165863-412c-4604-94c2-a3af60a284e9 936 0 2026-03-14 00:44:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-s677d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4894f1e9d5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.365 [INFO][3962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.457 [INFO][3994] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" HandleID="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.475 [INFO][3994] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" HandleID="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-s677d", "timestamp":"2026-03-14 00:44:33.457904441 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000404580)} Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.476 [INFO][3994] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.500 [INFO][3994] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.500 [INFO][3994] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.546 [INFO][3994] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.558 [INFO][3994] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.566 [INFO][3994] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.572 [INFO][3994] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.575 [INFO][3994] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.575 [INFO][3994] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.578 [INFO][3994] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4 Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.593 [INFO][3994] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.600 [INFO][3994] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.600 [INFO][3994] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" host="localhost" Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.601 [INFO][3994] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:33.648142 containerd[1445]: 2026-03-14 00:44:33.601 [INFO][3994] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" HandleID="k8s-pod-network.dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.648851 containerd[1445]: 2026-03-14 00:44:33.605 [INFO][3962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--s677d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"10165863-412c-4604-94c2-a3af60a284e9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-s677d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4894f1e9d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.648851 containerd[1445]: 2026-03-14 00:44:33.605 [INFO][3962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.648851 containerd[1445]: 2026-03-14 00:44:33.605 [INFO][3962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4894f1e9d5c ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.648851 containerd[1445]: 2026-03-14 00:44:33.615 
[INFO][3962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.648851 containerd[1445]: 2026-03-14 00:44:33.617 [INFO][3962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--s677d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"10165863-412c-4604-94c2-a3af60a284e9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4", Pod:"coredns-66bc5c9577-s677d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4894f1e9d5c", MAC:"3a:71:eb:74:86:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.648851 containerd[1445]: 2026-03-14 00:44:33.639 [INFO][3962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4" Namespace="kube-system" Pod="coredns-66bc5c9577-s677d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:44:33.654809 systemd[1]: Started cri-containerd-cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa.scope - libcontainer container cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa. 
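(Editor's note, not part of the journal.) The ipam/ipam.go lines above show each CNI ADD taking the host-wide IPAM lock, confirming the node's affinity for block 192.168.88.128/26, and then claiming consecutive addresses from it: 192.168.88.129 for coredns-66bc5c9577-wsjrn and 192.168.88.130 for coredns-66bc5c9577-s677d. A minimal sketch of just the address arithmetic, assuming nothing about Calico's allocator internals (which also persist handles and block affinities in the datastore):

```go
// Illustrative sketch only — not Calico's IPAM allocator. It shows the address
// arithmetic behind block 192.168.88.128/26 from the ipam log lines above,
// where consecutive ordinals yield 192.168.88.129 and 192.168.88.130.
package main

import (
	"fmt"
	"net/netip"
)

// nthAddr returns the address n steps after the block's base address.
func nthAddr(block netip.Prefix, n int) netip.Addr {
	a := block.Addr()
	for i := 0; i < n; i++ {
		a = a.Next()
	}
	return a
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Println(nthAddr(block, 1)) // 192.168.88.129 — coredns-66bc5c9577-wsjrn
	fmt.Println(nthAddr(block, 2)) // 192.168.88.130 — coredns-66bc5c9577-s677d
}
```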
Mar 14 00:44:33.697404 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:33.759821 systemd-networkd[1388]: cali459334cfcfd: Link UP Mar 14 00:44:33.761976 systemd-networkd[1388]: cali459334cfcfd: Gained carrier Mar 14 00:44:33.779882 systemd[1]: run-netns-cni\x2dec78dc96\x2dcd4c\x2d28b7\x2d21cd\x2d19c23ddc8589.mount: Deactivated successfully. Mar 14 00:44:33.779984 systemd[1]: run-netns-cni\x2d6871b368\x2d70da\x2d2c02\x2d605e\x2d57587a3bdb2e.mount: Deactivated successfully. Mar 14 00:44:33.780050 systemd[1]: run-netns-cni\x2dc8a3bdd8\x2dedf2\x2df6c1\x2d635d\x2d18ecc31caf23.mount: Deactivated successfully. Mar 14 00:44:33.780113 systemd[1]: run-netns-cni\x2d9f5ef3e4\x2dbd9b\x2d180f\x2dc207\x2dfb76033a1794.mount: Deactivated successfully. Mar 14 00:44:33.780176 systemd[1]: run-netns-cni\x2d7a0b68bc\x2dac4c\x2d4d95\x2d0b78\x2da8eefbcdf84e.mount: Deactivated successfully. Mar 14 00:44:33.780243 systemd[1]: var-lib-kubelet-pods-82c3e5a8\x2d80d3\x2d44f5\x2d9eff\x2d3b2203436fbc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7flx4.mount: Deactivated successfully. Mar 14 00:44:33.780355 systemd[1]: var-lib-kubelet-pods-82c3e5a8\x2d80d3\x2d44f5\x2d9eff\x2d3b2203436fbc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:44:33.795623 containerd[1445]: time="2026-03-14T00:44:33.794595079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:33.795623 containerd[1445]: time="2026-03-14T00:44:33.794692243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:33.795623 containerd[1445]: time="2026-03-14T00:44:33.794702692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:33.795623 containerd[1445]: time="2026-03-14T00:44:33.794793444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:33.811816 containerd[1445]: time="2026-03-14T00:44:33.810980966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wsjrn,Uid:7c9fbbc2-3c55-454a-b11a-04b92d39d42f,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa\"" Mar 14 00:44:33.830927 kubelet[2504]: E0314 00:44:33.828735 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:33.833719 systemd[1]: Started cri-containerd-dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4.scope - libcontainer container dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4. 
Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.344 [ERROR][3889] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.366 [INFO][3889] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0 calico-apiserver-7746bfdf9f- calico-system 2b88dcf5-44e7-404f-8042-caf40cd3a058 932 0 2026-03-14 00:44:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7746bfdf9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7746bfdf9f-tcpxw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali459334cfcfd [] [] }} ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.367 [INFO][3889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.483 [INFO][4002] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" HandleID="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.500 [INFO][4002] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" HandleID="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7746bfdf9f-tcpxw", "timestamp":"2026-03-14 00:44:33.483406669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000345760)} Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.500 [INFO][4002] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.601 [INFO][4002] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.601 [INFO][4002] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.646 [INFO][4002] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.667 [INFO][4002] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.687 [INFO][4002] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.690 [INFO][4002] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.698 [INFO][4002] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.700 [INFO][4002] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.704 [INFO][4002] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.714 [INFO][4002] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.724 [INFO][4002] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.724 [INFO][4002] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" host="localhost" Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.724 [INFO][4002] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:33.840172 containerd[1445]: 2026-03-14 00:44:33.724 [INFO][4002] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" HandleID="k8s-pod-network.59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.841421 containerd[1445]: 2026-03-14 00:44:33.739 [INFO][3889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"2b88dcf5-44e7-404f-8042-caf40cd3a058", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7746bfdf9f-tcpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali459334cfcfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.841421 containerd[1445]: 2026-03-14 00:44:33.739 [INFO][3889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.841421 containerd[1445]: 2026-03-14 00:44:33.739 [INFO][3889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali459334cfcfd ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.841421 containerd[1445]: 2026-03-14 00:44:33.764 [INFO][3889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.841421 containerd[1445]: 2026-03-14 00:44:33.765 [INFO][3889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"2b88dcf5-44e7-404f-8042-caf40cd3a058", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f", Pod:"calico-apiserver-7746bfdf9f-tcpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali459334cfcfd", MAC:"c2:02:93:b4:11:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.841421 containerd[1445]: 2026-03-14 00:44:33.801 [INFO][3889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-tcpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:44:33.847011 containerd[1445]: time="2026-03-14T00:44:33.845840867Z" level=info msg="CreateContainer within sandbox \"cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:44:33.862942 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:33.872279 systemd-networkd[1388]: cali7c03a6b746d: Link UP Mar 14 00:44:33.875818 systemd-networkd[1388]: cali7c03a6b746d: Gained carrier Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.367 [ERROR][3898] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.395 [INFO][3898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--96thm-eth0 csi-node-driver- calico-system 33ff238e-5cbd-4d42-b80c-67e32b8fb49d 934 0 2026-03-14 00:44:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-96thm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7c03a6b746d [] [] }} ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.396 [INFO][3898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.487 [INFO][4021] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" HandleID="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.501 [INFO][4021] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" HandleID="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Workload="localhost-k8s-csi--node--driver--96thm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-96thm", "timestamp":"2026-03-14 00:44:33.487037385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030d340)} Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.501 [INFO][4021] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.725 [INFO][4021] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.726 [INFO][4021] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.760 [INFO][4021] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.796 [INFO][4021] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.811 [INFO][4021] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.817 [INFO][4021] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.821 [INFO][4021] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.826 [INFO][4021] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.829 [INFO][4021] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.838 [INFO][4021] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.860 [INFO][4021] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.861 [INFO][4021] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" host="localhost" Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.861 [INFO][4021] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:33.921578 containerd[1445]: 2026-03-14 00:44:33.861 [INFO][4021] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" HandleID="k8s-pod-network.dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.922871 containerd[1445]: 2026-03-14 00:44:33.865 [INFO][3898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96thm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33ff238e-5cbd-4d42-b80c-67e32b8fb49d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-96thm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c03a6b746d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.922871 containerd[1445]: 2026-03-14 00:44:33.865 [INFO][3898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.922871 containerd[1445]: 2026-03-14 00:44:33.865 [INFO][3898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c03a6b746d ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.922871 containerd[1445]: 2026-03-14 00:44:33.877 [INFO][3898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.922871 containerd[1445]: 2026-03-14 00:44:33.878 [INFO][3898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96thm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33ff238e-5cbd-4d42-b80c-67e32b8fb49d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d", Pod:"csi-node-driver-96thm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c03a6b746d", MAC:"02:21:00:70:1f:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:33.922871 containerd[1445]: 2026-03-14 00:44:33.914 [INFO][3898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d" Namespace="calico-system" Pod="csi-node-driver-96thm" WorkloadEndpoint="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:44:33.950241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995476243.mount: Deactivated successfully. Mar 14 00:44:33.963794 containerd[1445]: time="2026-03-14T00:44:33.963764773Z" level=info msg="CreateContainer within sandbox \"cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e212b4c65a551e4e6dab05ab4c08d3113b90080715ae6ea508c06e9bd580fe3d\"" Mar 14 00:44:33.964940 systemd[1]: Removed slice kubepods-besteffort-pod82c3e5a8_80d3_44f5_9eff_3b2203436fbc.slice - libcontainer container kubepods-besteffort-pod82c3e5a8_80d3_44f5_9eff_3b2203436fbc.slice. Mar 14 00:44:33.968596 containerd[1445]: time="2026-03-14T00:44:33.967123071Z" level=info msg="StartContainer for \"e212b4c65a551e4e6dab05ab4c08d3113b90080715ae6ea508c06e9bd580fe3d\"" Mar 14 00:44:33.980873 containerd[1445]: time="2026-03-14T00:44:33.980709180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s677d,Uid:10165863-412c-4604-94c2-a3af60a284e9,Namespace:kube-system,Attempt:1,} returns sandbox id \"dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4\"" Mar 14 00:44:33.987549 kubelet[2504]: E0314 00:44:33.987230 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:34.008105 containerd[1445]: time="2026-03-14T00:44:34.006224141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:34.008105 containerd[1445]: time="2026-03-14T00:44:34.006400194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:34.008105 containerd[1445]: time="2026-03-14T00:44:34.006418178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.008105 containerd[1445]: time="2026-03-14T00:44:34.006661719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.043664 containerd[1445]: time="2026-03-14T00:44:34.041833378Z" level=info msg="CreateContainer within sandbox \"dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:44:34.063107 systemd[1]: Created slice kubepods-besteffort-poda4bbb940_f658_4743_8732_ca0e1e2e4f7c.slice - libcontainer container kubepods-besteffort-poda4bbb940_f658_4743_8732_ca0e1e2e4f7c.slice. Mar 14 00:44:34.073198 kubelet[2504]: I0314 00:44:34.072963 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4bbb940-f658-4743-8732-ca0e1e2e4f7c-whisker-ca-bundle\") pod \"whisker-6dd5c87855-chcll\" (UID: \"a4bbb940-f658-4743-8732-ca0e1e2e4f7c\") " pod="calico-system/whisker-6dd5c87855-chcll" Mar 14 00:44:34.073198 kubelet[2504]: I0314 00:44:34.073025 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4496b\" (UniqueName: \"kubernetes.io/projected/a4bbb940-f658-4743-8732-ca0e1e2e4f7c-kube-api-access-4496b\") pod \"whisker-6dd5c87855-chcll\" (UID: \"a4bbb940-f658-4743-8732-ca0e1e2e4f7c\") " pod="calico-system/whisker-6dd5c87855-chcll" Mar 14 00:44:34.073198 kubelet[2504]: I0314 00:44:34.073053 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4bbb940-f658-4743-8732-ca0e1e2e4f7c-nginx-config\") pod \"whisker-6dd5c87855-chcll\" (UID: \"a4bbb940-f658-4743-8732-ca0e1e2e4f7c\") " pod="calico-system/whisker-6dd5c87855-chcll" Mar 14 00:44:34.073198 kubelet[2504]: I0314 00:44:34.073079 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4bbb940-f658-4743-8732-ca0e1e2e4f7c-whisker-backend-key-pair\") pod \"whisker-6dd5c87855-chcll\" (UID: \"a4bbb940-f658-4743-8732-ca0e1e2e4f7c\") " pod="calico-system/whisker-6dd5c87855-chcll" Mar 14 00:44:34.087999 systemd-networkd[1388]: cali95152bf783a: Link UP Mar 14 00:44:34.091086 systemd-networkd[1388]: cali95152bf783a: Gained carrier Mar 14 00:44:34.094882 containerd[1445]: time="2026-03-14T00:44:34.094789626Z" level=info msg="CreateContainer within sandbox \"dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f85981cf5dda48ead1f5f0d96998e539c171a55ca25c31e655fc0d95575f3a63\"" Mar 14 00:44:34.098075 containerd[1445]: time="2026-03-14T00:44:34.096576393Z" level=info msg="StartContainer for \"f85981cf5dda48ead1f5f0d96998e539c171a55ca25c31e655fc0d95575f3a63\"" Mar 14 00:44:34.132602 systemd[1]: Started 
cri-containerd-59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f.scope - libcontainer container 59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f. Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.397 [ERROR][3969] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.418 [INFO][3969] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0 calico-kube-controllers-85b988fbff- calico-system 3831536c-b543-4e9f-9a7f-69e237b512a5 937 0 2026-03-14 00:44:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85b988fbff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85b988fbff-2sprk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95152bf783a [] [] }} ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.418 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.495 [INFO][4033] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" HandleID="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.507 [INFO][4033] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" HandleID="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85b988fbff-2sprk", "timestamp":"2026-03-14 00:44:33.495911773 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005fedc0)} Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.507 [INFO][4033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.862 [INFO][4033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.862 [INFO][4033] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.868 [INFO][4033] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.904 [INFO][4033] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.918 [INFO][4033] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.934 [INFO][4033] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.959 [INFO][4033] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.959 [INFO][4033] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.975 [INFO][4033] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:33.999 [INFO][4033] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:34.026 [INFO][4033] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:34.039 [INFO][4033] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" host="localhost" Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:34.039 [INFO][4033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:34.138403 containerd[1445]: 2026-03-14 00:44:34.041 [INFO][4033] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" HandleID="k8s-pod-network.d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.140480 containerd[1445]: 2026-03-14 00:44:34.063 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0", GenerateName:"calico-kube-controllers-85b988fbff-", Namespace:"calico-system", SelfLink:"", UID:"3831536c-b543-4e9f-9a7f-69e237b512a5", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b988fbff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85b988fbff-2sprk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95152bf783a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.140480 containerd[1445]: 2026-03-14 00:44:34.063 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.140480 containerd[1445]: 2026-03-14 00:44:34.063 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95152bf783a ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.140480 containerd[1445]: 2026-03-14 00:44:34.093 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.140480 containerd[1445]: 2026-03-14 00:44:34.094 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0", GenerateName:"calico-kube-controllers-85b988fbff-", Namespace:"calico-system", SelfLink:"", UID:"3831536c-b543-4e9f-9a7f-69e237b512a5", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b988fbff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a", Pod:"calico-kube-controllers-85b988fbff-2sprk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95152bf783a", MAC:"0a:6e:12:c6:7c:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.140480 containerd[1445]: 2026-03-14 00:44:34.121 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a" Namespace="calico-system" Pod="calico-kube-controllers-85b988fbff-2sprk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:44:34.154209 containerd[1445]: time="2026-03-14T00:44:34.153739481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:34.154209 containerd[1445]: time="2026-03-14T00:44:34.153790436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:34.154209 containerd[1445]: time="2026-03-14T00:44:34.153802119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.154209 containerd[1445]: time="2026-03-14T00:44:34.153876059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.184121 systemd-networkd[1388]: cali20206297317: Link UP Mar 14 00:44:34.187375 systemd-networkd[1388]: cali20206297317: Gained carrier Mar 14 00:44:34.195362 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:34.213851 systemd[1]: Started cri-containerd-e212b4c65a551e4e6dab05ab4c08d3113b90080715ae6ea508c06e9bd580fe3d.scope - libcontainer container e212b4c65a551e4e6dab05ab4c08d3113b90080715ae6ea508c06e9bd580fe3d. Mar 14 00:44:34.219700 systemd[1]: Started cri-containerd-f85981cf5dda48ead1f5f0d96998e539c171a55ca25c31e655fc0d95575f3a63.scope - libcontainer container f85981cf5dda48ead1f5f0d96998e539c171a55ca25c31e655fc0d95575f3a63. Mar 14 00:44:34.232467 systemd[1]: Started cri-containerd-dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d.scope - libcontainer container dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d. Mar 14 00:44:34.232639 containerd[1445]: time="2026-03-14T00:44:34.231538980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:34.232639 containerd[1445]: time="2026-03-14T00:44:34.231603232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:34.232639 containerd[1445]: time="2026-03-14T00:44:34.231617208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.232639 containerd[1445]: time="2026-03-14T00:44:34.231705174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:33.340 [ERROR][3937] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:33.381 [INFO][3937] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0 calico-apiserver-7746bfdf9f- calico-system a0072c78-1437-4491-bedf-c69885c50e4d 935 0 2026-03-14 00:44:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7746bfdf9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7746bfdf9f-ds2qp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali20206297317 [] [] }} ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:33.383 [INFO][3937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:33.492 [INFO][4018] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" HandleID="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:33.517 [INFO][4018] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" HandleID="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7746bfdf9f-ds2qp", "timestamp":"2026-03-14 00:44:33.492118815 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000740160)} Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:33.517 [INFO][4018] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.043 [INFO][4018] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.043 [INFO][4018] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.065 [INFO][4018] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.123 [INFO][4018] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.138 [INFO][4018] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.141 [INFO][4018] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.145 [INFO][4018] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.148 [INFO][4018] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.151 [INFO][4018] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6 Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.159 [INFO][4018] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.168 [INFO][4018] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.168 [INFO][4018] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" host="localhost" Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.170 [INFO][4018] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:34.237811 containerd[1445]: 2026-03-14 00:44:34.170 [INFO][4018] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" HandleID="k8s-pod-network.b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.238403 containerd[1445]: 2026-03-14 00:44:34.174 [INFO][3937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"a0072c78-1437-4491-bedf-c69885c50e4d", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7746bfdf9f-ds2qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20206297317", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.238403 containerd[1445]: 2026-03-14 00:44:34.174 [INFO][3937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.238403 containerd[1445]: 2026-03-14 00:44:34.174 [INFO][3937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20206297317 ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.238403 containerd[1445]: 2026-03-14 00:44:34.190 [INFO][3937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.238403 containerd[1445]: 2026-03-14 00:44:34.190 [INFO][3937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"a0072c78-1437-4491-bedf-c69885c50e4d", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6", Pod:"calico-apiserver-7746bfdf9f-ds2qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20206297317", MAC:"46:40:6b:37:ff:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.238403 containerd[1445]: 2026-03-14 00:44:34.224 [INFO][3937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6" Namespace="calico-system" Pod="calico-apiserver-7746bfdf9f-ds2qp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:44:34.287714 systemd[1]: Started cri-containerd-d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a.scope - libcontainer container d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a. 
Mar 14 00:44:34.299938 containerd[1445]: time="2026-03-14T00:44:34.299791234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-tcpxw,Uid:2b88dcf5-44e7-404f-8042-caf40cd3a058,Namespace:calico-system,Attempt:1,} returns sandbox id \"59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f\"" Mar 14 00:44:34.306633 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:34.318222 containerd[1445]: time="2026-03-14T00:44:34.318190206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:44:34.384478 containerd[1445]: time="2026-03-14T00:44:34.375707123Z" level=info msg="StartContainer for \"e212b4c65a551e4e6dab05ab4c08d3113b90080715ae6ea508c06e9bd580fe3d\" returns successfully" Mar 14 00:44:34.386740 systemd-networkd[1388]: cali70c41b549c6: Link UP Mar 14 00:44:34.387360 containerd[1445]: time="2026-03-14T00:44:34.387279098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd5c87855-chcll,Uid:a4bbb940-f658-4743-8732-ca0e1e2e4f7c,Namespace:calico-system,Attempt:0,}" Mar 14 00:44:34.391811 systemd-networkd[1388]: cali70c41b549c6: Gained carrier Mar 14 00:44:34.392579 containerd[1445]: time="2026-03-14T00:44:34.391859920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96thm,Uid:33ff238e-5cbd-4d42-b80c-67e32b8fb49d,Namespace:calico-system,Attempt:1,} returns sandbox id \"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d\"" Mar 14 00:44:34.398415 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:34.418918 containerd[1445]: time="2026-03-14T00:44:34.418789096Z" level=info msg="StartContainer for \"f85981cf5dda48ead1f5f0d96998e539c171a55ca25c31e655fc0d95575f3a63\" returns successfully" Mar 14 00:44:34.430224 containerd[1445]: time="2026-03-14T00:44:34.429815079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:34.430224 containerd[1445]: time="2026-03-14T00:44:34.430118444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:34.430224 containerd[1445]: time="2026-03-14T00:44:34.430131518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.431292 containerd[1445]: time="2026-03-14T00:44:34.430460732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:33.479 [ERROR][3999] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:33.518 [INFO][3999] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0 goldmane-cccfbd5cf- calico-system 749bfef9-76e0-4e7a-aa4c-68e01c2e1c20 939 0 2026-03-14 00:44:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-2slxj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali70c41b549c6 [] [] }} ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:33.518 [INFO][3999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:33.604 [INFO][4057] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" HandleID="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:33.621 [INFO][4057] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" HandleID="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-2slxj", "timestamp":"2026-03-14 00:44:33.604207687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001702c0)} Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:33.621 [INFO][4057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.169 [INFO][4057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.169 [INFO][4057] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.183 [INFO][4057] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.232 [INFO][4057] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.269 [INFO][4057] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.277 [INFO][4057] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.288 [INFO][4057] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.288 [INFO][4057] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.305 [INFO][4057] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06 Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.318 [INFO][4057] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.341 [INFO][4057] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.341 [INFO][4057] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" host="localhost" Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.342 [INFO][4057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
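The sequence above is Calico's IPAM auto-assign for the goldmane pod: the CNI plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.88.128/26 block, claims the next free address (192.168.88.135), writes the block back to the datastore, and releases the lock. Below is a minimal, self-contained Go sketch of just the claim step; the block type and bookkeeping are illustrative stand-ins, not Calico's real libcalico-go types or datastore handling.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for a Calico IPAM allocation block:
    // one CIDR plus the addresses already handed out on this node.
    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]string // address -> allocation handle
    }

    // claimNext hands out the lowest free address in the block, mirroring the
    // "Attempting to assign 1 addresses from block" / "Successfully claimed IPs" steps.
    func (b *block) claimNext(handle string) (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, used := b.allocated[a]; !used {
                b.allocated[a] = handle
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            cidr:      netip.MustParsePrefix("192.168.88.128/26"),
            allocated: map[netip.Addr]string{},
        }
        // Pretend .128 through .134 were claimed by the pods networked earlier in this log.
        for a := netip.MustParseAddr("192.168.88.128"); a.Less(netip.MustParseAddr("192.168.88.135")); a = a.Next() {
            b.allocated[a] = "existing-handle"
        }
        ip, ok := b.claimNext("example-handle")
        fmt.Println(ip, ok) // 192.168.88.135 true
    }

In the real plugin the claim is serialized by the host-wide lock seen above, so concurrent CNI ADDs on the same node cannot hand out the same address.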
Mar 14 00:44:34.443404 containerd[1445]: 2026-03-14 00:44:34.342 [INFO][4057] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" HandleID="k8s-pod-network.2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.444786 containerd[1445]: 2026-03-14 00:44:34.380 [INFO][3999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-2slxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70c41b549c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.444786 containerd[1445]: 2026-03-14 00:44:34.380 [INFO][3999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.444786 containerd[1445]: 2026-03-14 00:44:34.380 [INFO][3999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70c41b549c6 ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.444786 containerd[1445]: 2026-03-14 00:44:34.393 [INFO][3999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.444786 containerd[1445]: 2026-03-14 00:44:34.394 [INFO][3999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06", Pod:"goldmane-cccfbd5cf-2slxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70c41b549c6", MAC:"82:21:b4:ee:e8:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.444786 containerd[1445]: 2026-03-14 00:44:34.433 [INFO][3999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2slxj" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:44:34.489748 systemd[1]: Started cri-containerd-b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6.scope - libcontainer container b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6. Mar 14 00:44:34.507993 containerd[1445]: time="2026-03-14T00:44:34.507927409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b988fbff-2sprk,Uid:3831536c-b543-4e9f-9a7f-69e237b512a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a\"" Mar 14 00:44:34.521988 containerd[1445]: time="2026-03-14T00:44:34.521690510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:34.522586 containerd[1445]: time="2026-03-14T00:44:34.522173274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:34.523751 containerd[1445]: time="2026-03-14T00:44:34.522774685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.531671 containerd[1445]: time="2026-03-14T00:44:34.528926390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.537700 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:34.579890 systemd[1]: Started cri-containerd-2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06.scope - libcontainer container 2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06. Mar 14 00:44:34.604700 containerd[1445]: time="2026-03-14T00:44:34.604399180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7746bfdf9f-ds2qp,Uid:a0072c78-1437-4491-bedf-c69885c50e4d,Namespace:calico-system,Attempt:1,} returns sandbox id \"b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6\"" Mar 14 00:44:34.619585 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:34.655945 containerd[1445]: time="2026-03-14T00:44:34.655740367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2slxj,Uid:749bfef9-76e0-4e7a-aa4c-68e01c2e1c20,Namespace:calico-system,Attempt:1,} returns sandbox id \"2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06\"" Mar 14 00:44:34.689192 systemd-networkd[1388]: calieb9eaa1ee2e: Link UP Mar 14 00:44:34.690870 systemd-networkd[1388]: calieb9eaa1ee2e: Gained carrier Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.520 [ERROR][4544] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.545 [INFO][4544] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6dd5c87855--chcll-eth0 whisker-6dd5c87855- calico-system a4bbb940-f658-4743-8732-ca0e1e2e4f7c 977 0 2026-03-14 00:44:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dd5c87855 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6dd5c87855-chcll eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieb9eaa1ee2e [] [] }} ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.545 [INFO][4544] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.630 [INFO][4617] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" HandleID="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Workload="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.639 [INFO][4617] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" HandleID="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" 
Workload="localhost-k8s-whisker--6dd5c87855--chcll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000431670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6dd5c87855-chcll", "timestamp":"2026-03-14 00:44:34.630273091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003aedc0)} Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.639 [INFO][4617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.639 [INFO][4617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.639 [INFO][4617] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.642 [INFO][4617] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.648 [INFO][4617] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.653 [INFO][4617] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.656 [INFO][4617] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.660 [INFO][4617] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.661 [INFO][4617] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.664 [INFO][4617] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1 Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.669 [INFO][4617] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.679 [INFO][4617] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.679 [INFO][4617] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" host="localhost" Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.680 [INFO][4617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:44:34.712376 containerd[1445]: 2026-03-14 00:44:34.680 [INFO][4617] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" HandleID="k8s-pod-network.31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Workload="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.713449 containerd[1445]: 2026-03-14 00:44:34.685 [INFO][4544] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dd5c87855--chcll-eth0", GenerateName:"whisker-6dd5c87855-", Namespace:"calico-system", SelfLink:"", UID:"a4bbb940-f658-4743-8732-ca0e1e2e4f7c", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dd5c87855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6dd5c87855-chcll", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb9eaa1ee2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.713449 containerd[1445]: 2026-03-14 00:44:34.685 [INFO][4544] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.713449 containerd[1445]: 2026-03-14 00:44:34.685 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb9eaa1ee2e ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.713449 containerd[1445]: 2026-03-14 00:44:34.691 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.713449 containerd[1445]: 2026-03-14 00:44:34.692 [INFO][4544] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dd5c87855--chcll-eth0", GenerateName:"whisker-6dd5c87855-", Namespace:"calico-system", SelfLink:"", UID:"a4bbb940-f658-4743-8732-ca0e1e2e4f7c", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dd5c87855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1", Pod:"whisker-6dd5c87855-chcll", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb9eaa1ee2e", MAC:"42:9e:57:ec:51:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:44:34.713449 containerd[1445]: 2026-03-14 00:44:34.705 [INFO][4544] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1" Namespace="calico-system" Pod="whisker-6dd5c87855-chcll" WorkloadEndpoint="localhost-k8s-whisker--6dd5c87855--chcll-eth0" Mar 14 00:44:34.737276 kubelet[2504]: I0314 00:44:34.737051 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c3e5a8-80d3-44f5-9eff-3b2203436fbc" path="/var/lib/kubelet/pods/82c3e5a8-80d3-44f5-9eff-3b2203436fbc/volumes" Mar 14 00:44:34.745900 containerd[1445]: time="2026-03-14T00:44:34.745618872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:44:34.745900 containerd[1445]: time="2026-03-14T00:44:34.745858015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:44:34.745900 containerd[1445]: time="2026-03-14T00:44:34.745873735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.746586 containerd[1445]: time="2026-03-14T00:44:34.745965909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:44:34.799736 systemd[1]: Started cri-containerd-31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1.scope - libcontainer container 31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1. 
Mar 14 00:44:34.817096 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:44:34.846704 systemd-networkd[1388]: cali4894f1e9d5c: Gained IPv6LL Mar 14 00:44:34.873950 containerd[1445]: time="2026-03-14T00:44:34.868839407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd5c87855-chcll,Uid:a4bbb940-f658-4743-8732-ca0e1e2e4f7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1\"" Mar 14 00:44:34.968138 kubelet[2504]: E0314 00:44:34.968110 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:34.990453 kubelet[2504]: E0314 00:44:34.990396 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:35.040118 kubelet[2504]: I0314 00:44:35.040069 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wsjrn" podStartSLOduration=28.04005637 podStartE2EDuration="28.04005637s" podCreationTimestamp="2026-03-14 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:44:35.03981292 +0000 UTC m=+34.441536319" watchObservedRunningTime="2026-03-14 00:44:35.04005637 +0000 UTC m=+34.441779770" Mar 14 00:44:35.040753 kubelet[2504]: I0314 00:44:35.040727 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s677d" podStartSLOduration=28.040720197 podStartE2EDuration="28.040720197s" podCreationTimestamp="2026-03-14 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:44:35.021175577 +0000 UTC m=+34.422898975" watchObservedRunningTime="2026-03-14 00:44:35.040720197 +0000 UTC m=+34.442443596" Mar 14 00:44:35.422737 systemd-networkd[1388]: cali95152bf783a: Gained IPv6LL Mar 14 00:44:35.424063 systemd-networkd[1388]: cali20206297317: Gained IPv6LL Mar 14 00:44:35.425162 systemd-networkd[1388]: cali459334cfcfd: Gained IPv6LL Mar 14 00:44:35.614946 systemd-networkd[1388]: calif6af40ce07a: Gained IPv6LL Mar 14 00:44:35.678769 systemd-networkd[1388]: cali7c03a6b746d: Gained IPv6LL Mar 14 00:44:35.774836 containerd[1445]: time="2026-03-14T00:44:35.774761129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:35.776070 containerd[1445]: time="2026-03-14T00:44:35.775976769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:44:35.777524 containerd[1445]: time="2026-03-14T00:44:35.777401408Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:35.780316 containerd[1445]: time="2026-03-14T00:44:35.780261169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:35.781018 containerd[1445]: 
time="2026-03-14T00:44:35.780968303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.462296023s" Mar 14 00:44:35.781063 containerd[1445]: time="2026-03-14T00:44:35.781014932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:44:35.782254 containerd[1445]: time="2026-03-14T00:44:35.782235181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:44:35.787779 containerd[1445]: time="2026-03-14T00:44:35.787733086Z" level=info msg="CreateContainer within sandbox \"59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:44:35.817403 containerd[1445]: time="2026-03-14T00:44:35.817276350Z" level=info msg="CreateContainer within sandbox \"59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc8a131c0270e2c8bd72af78f17cfbf8284bdabff1014293b0b10e93b2a1e0ae\"" Mar 14 00:44:35.818825 containerd[1445]: time="2026-03-14T00:44:35.818657187Z" level=info msg="StartContainer for \"cc8a131c0270e2c8bd72af78f17cfbf8284bdabff1014293b0b10e93b2a1e0ae\"" Mar 14 00:44:35.869761 systemd[1]: Started cri-containerd-cc8a131c0270e2c8bd72af78f17cfbf8284bdabff1014293b0b10e93b2a1e0ae.scope - libcontainer container cc8a131c0270e2c8bd72af78f17cfbf8284bdabff1014293b0b10e93b2a1e0ae. 
Mar 14 00:44:35.919466 containerd[1445]: time="2026-03-14T00:44:35.919314374Z" level=info msg="StartContainer for \"cc8a131c0270e2c8bd72af78f17cfbf8284bdabff1014293b0b10e93b2a1e0ae\" returns successfully" Mar 14 00:44:36.040752 kubelet[2504]: E0314 00:44:36.038756 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:36.040752 kubelet[2504]: E0314 00:44:36.039441 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:36.054926 kubelet[2504]: I0314 00:44:36.054032 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7746bfdf9f-tcpxw" podStartSLOduration=16.576945478 podStartE2EDuration="18.05401758s" podCreationTimestamp="2026-03-14 00:44:18 +0000 UTC" firstStartedPulling="2026-03-14 00:44:34.303879191 +0000 UTC m=+33.706795348" lastFinishedPulling="2026-03-14 00:44:35.782144051 +0000 UTC m=+35.183867450" observedRunningTime="2026-03-14 00:44:36.051551905 +0000 UTC m=+35.453275304" watchObservedRunningTime="2026-03-14 00:44:36.05401758 +0000 UTC m=+35.455740979" Mar 14 00:44:36.383685 systemd-networkd[1388]: cali70c41b549c6: Gained IPv6LL Mar 14 00:44:36.489683 containerd[1445]: time="2026-03-14T00:44:36.489542773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:36.490835 containerd[1445]: time="2026-03-14T00:44:36.490726040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:44:36.492416 containerd[1445]: time="2026-03-14T00:44:36.492289707Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:36.495652 containerd[1445]: time="2026-03-14T00:44:36.495567270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:36.497086 containerd[1445]: time="2026-03-14T00:44:36.497033178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 714.687989ms" Mar 14 00:44:36.497170 containerd[1445]: time="2026-03-14T00:44:36.497093462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:44:36.499263 containerd[1445]: time="2026-03-14T00:44:36.499107214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:44:36.504110 containerd[1445]: time="2026-03-14T00:44:36.504029375Z" level=info msg="CreateContainer within sandbox \"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:44:36.524229 containerd[1445]: time="2026-03-14T00:44:36.524125627Z" level=info msg="CreateContainer within sandbox 
\"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5c4ca74a7388fe16e0d8ef9d7c63ca6245f4f38807248cbfe664b4a69eb06d86\"" Mar 14 00:44:36.526017 containerd[1445]: time="2026-03-14T00:44:36.525061336Z" level=info msg="StartContainer for \"5c4ca74a7388fe16e0d8ef9d7c63ca6245f4f38807248cbfe664b4a69eb06d86\"" Mar 14 00:44:36.561751 systemd[1]: Started cri-containerd-5c4ca74a7388fe16e0d8ef9d7c63ca6245f4f38807248cbfe664b4a69eb06d86.scope - libcontainer container 5c4ca74a7388fe16e0d8ef9d7c63ca6245f4f38807248cbfe664b4a69eb06d86. Mar 14 00:44:36.600624 containerd[1445]: time="2026-03-14T00:44:36.600553856Z" level=info msg="StartContainer for \"5c4ca74a7388fe16e0d8ef9d7c63ca6245f4f38807248cbfe664b4a69eb06d86\" returns successfully" Mar 14 00:44:36.702811 systemd-networkd[1388]: calieb9eaa1ee2e: Gained IPv6LL Mar 14 00:44:37.048838 kubelet[2504]: E0314 00:44:37.048601 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:37.048838 kubelet[2504]: E0314 00:44:37.048720 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:37.048838 kubelet[2504]: I0314 00:44:37.048791 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:44:38.037146 containerd[1445]: time="2026-03-14T00:44:38.037012045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:38.038145 containerd[1445]: time="2026-03-14T00:44:38.038057901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:44:38.040159 containerd[1445]: time="2026-03-14T00:44:38.040041259Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:38.043381 containerd[1445]: time="2026-03-14T00:44:38.043155946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:38.044447 containerd[1445]: time="2026-03-14T00:44:38.044325632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.545149138s" Mar 14 00:44:38.044541 containerd[1445]: time="2026-03-14T00:44:38.044399702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:44:38.046153 containerd[1445]: time="2026-03-14T00:44:38.046097324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:44:38.056137 kubelet[2504]: E0314 00:44:38.055076 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:38.062601 containerd[1445]: time="2026-03-14T00:44:38.062359873Z" level=info msg="CreateContainer within sandbox \"d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:44:38.078655 containerd[1445]: time="2026-03-14T00:44:38.078545313Z" level=info msg="CreateContainer within sandbox \"d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f2b5f291695507fb5841341cb11359fdf80fd8e7e051808ce499f6d8b00d6627\"" Mar 14 00:44:38.079135 containerd[1445]: time="2026-03-14T00:44:38.079005192Z" level=info msg="StartContainer for \"f2b5f291695507fb5841341cb11359fdf80fd8e7e051808ce499f6d8b00d6627\"" Mar 14 00:44:38.122718 systemd[1]: Started cri-containerd-f2b5f291695507fb5841341cb11359fdf80fd8e7e051808ce499f6d8b00d6627.scope - libcontainer container f2b5f291695507fb5841341cb11359fdf80fd8e7e051808ce499f6d8b00d6627. Mar 14 00:44:38.176990 containerd[1445]: time="2026-03-14T00:44:38.176899027Z" level=info msg="StartContainer for \"f2b5f291695507fb5841341cb11359fdf80fd8e7e051808ce499f6d8b00d6627\" returns successfully" Mar 14 00:44:38.177309 containerd[1445]: time="2026-03-14T00:44:38.177091654Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:38.179774 containerd[1445]: time="2026-03-14T00:44:38.179404535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:44:38.182004 containerd[1445]: time="2026-03-14T00:44:38.181914889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 135.78808ms" Mar 14 00:44:38.182004 containerd[1445]: time="2026-03-14T00:44:38.181961137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:44:38.184720 containerd[1445]: time="2026-03-14T00:44:38.184664676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:44:38.189138 containerd[1445]: time="2026-03-14T00:44:38.189043941Z" level=info msg="CreateContainer within sandbox \"b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:44:38.203876 containerd[1445]: time="2026-03-14T00:44:38.203760698Z" level=info msg="CreateContainer within sandbox \"b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2801942d1a6bfa0aaf4c4db42226af4bdeb27258f33af15747bd78593b0524b\"" Mar 14 00:44:38.205861 containerd[1445]: time="2026-03-14T00:44:38.204705128Z" level=info msg="StartContainer for \"f2801942d1a6bfa0aaf4c4db42226af4bdeb27258f33af15747bd78593b0524b\"" Mar 14 00:44:38.247199 systemd[1]: Started cri-containerd-f2801942d1a6bfa0aaf4c4db42226af4bdeb27258f33af15747bd78593b0524b.scope - libcontainer container 
f2801942d1a6bfa0aaf4c4db42226af4bdeb27258f33af15747bd78593b0524b. Mar 14 00:44:38.324393 containerd[1445]: time="2026-03-14T00:44:38.324011490Z" level=info msg="StartContainer for \"f2801942d1a6bfa0aaf4c4db42226af4bdeb27258f33af15747bd78593b0524b\" returns successfully" Mar 14 00:44:39.094701 kubelet[2504]: I0314 00:44:39.094633 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7746bfdf9f-ds2qp" podStartSLOduration=17.519567465 podStartE2EDuration="21.094618091s" podCreationTimestamp="2026-03-14 00:44:18 +0000 UTC" firstStartedPulling="2026-03-14 00:44:34.608137788 +0000 UTC m=+34.009861187" lastFinishedPulling="2026-03-14 00:44:38.183188413 +0000 UTC m=+37.584911813" observedRunningTime="2026-03-14 00:44:39.077862458 +0000 UTC m=+38.479585856" watchObservedRunningTime="2026-03-14 00:44:39.094618091 +0000 UTC m=+38.496341490" Mar 14 00:44:39.097117 kubelet[2504]: I0314 00:44:39.095433 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85b988fbff-2sprk" podStartSLOduration=16.565860036 podStartE2EDuration="20.095424555s" podCreationTimestamp="2026-03-14 00:44:19 +0000 UTC" firstStartedPulling="2026-03-14 00:44:34.51596009 +0000 UTC m=+33.917683489" lastFinishedPulling="2026-03-14 00:44:38.045524609 +0000 UTC m=+37.447248008" observedRunningTime="2026-03-14 00:44:39.095366905 +0000 UTC m=+38.497090305" watchObservedRunningTime="2026-03-14 00:44:39.095424555 +0000 UTC m=+38.497147953" Mar 14 00:44:39.432256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097626030.mount: Deactivated successfully. Mar 14 00:44:39.924133 containerd[1445]: time="2026-03-14T00:44:39.924035824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:39.925331 containerd[1445]: time="2026-03-14T00:44:39.925247729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 14 00:44:39.926923 containerd[1445]: time="2026-03-14T00:44:39.926865174Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:39.929892 containerd[1445]: time="2026-03-14T00:44:39.929765233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:39.930629 containerd[1445]: time="2026-03-14T00:44:39.930590244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.745874031s" Mar 14 00:44:39.930691 containerd[1445]: time="2026-03-14T00:44:39.930633677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 14 00:44:39.931912 containerd[1445]: time="2026-03-14T00:44:39.931866502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:44:39.935771 containerd[1445]: time="2026-03-14T00:44:39.935657542Z" level=info 
msg="CreateContainer within sandbox \"2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:44:39.970034 containerd[1445]: time="2026-03-14T00:44:39.969951133Z" level=info msg="CreateContainer within sandbox \"2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ef33a6ee4101d9aaef5f0ab27b6fad9f264ab3d28270936e4f41721eade15c07\"" Mar 14 00:44:39.982603 containerd[1445]: time="2026-03-14T00:44:39.982455461Z" level=info msg="StartContainer for \"ef33a6ee4101d9aaef5f0ab27b6fad9f264ab3d28270936e4f41721eade15c07\"" Mar 14 00:44:40.016943 systemd[1]: Started cri-containerd-ef33a6ee4101d9aaef5f0ab27b6fad9f264ab3d28270936e4f41721eade15c07.scope - libcontainer container ef33a6ee4101d9aaef5f0ab27b6fad9f264ab3d28270936e4f41721eade15c07. Mar 14 00:44:40.062158 containerd[1445]: time="2026-03-14T00:44:40.062118013Z" level=info msg="StartContainer for \"ef33a6ee4101d9aaef5f0ab27b6fad9f264ab3d28270936e4f41721eade15c07\" returns successfully" Mar 14 00:44:40.083582 kubelet[2504]: I0314 00:44:40.081214 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:44:40.677915 containerd[1445]: time="2026-03-14T00:44:40.677825245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:40.678846 containerd[1445]: time="2026-03-14T00:44:40.678777975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:44:40.680220 containerd[1445]: time="2026-03-14T00:44:40.680140645Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:40.683039 containerd[1445]: time="2026-03-14T00:44:40.682933573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:40.683873 containerd[1445]: time="2026-03-14T00:44:40.683821750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 751.900694ms" Mar 14 00:44:40.683873 containerd[1445]: time="2026-03-14T00:44:40.683872085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:44:40.685391 containerd[1445]: time="2026-03-14T00:44:40.685322599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:44:40.690307 containerd[1445]: time="2026-03-14T00:44:40.690254252Z" level=info msg="CreateContainer within sandbox \"31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:44:40.712242 containerd[1445]: time="2026-03-14T00:44:40.712183694Z" level=info msg="CreateContainer within sandbox \"31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a9990128b218b3ac3d1a8025e38baa1c28cda3a862d4b378f83f171f7f262664\"" Mar 14 00:44:40.714198 containerd[1445]: time="2026-03-14T00:44:40.713093403Z" level=info msg="StartContainer for \"a9990128b218b3ac3d1a8025e38baa1c28cda3a862d4b378f83f171f7f262664\"" Mar 14 00:44:40.767795 systemd[1]: Started cri-containerd-a9990128b218b3ac3d1a8025e38baa1c28cda3a862d4b378f83f171f7f262664.scope - libcontainer container a9990128b218b3ac3d1a8025e38baa1c28cda3a862d4b378f83f171f7f262664. Mar 14 00:44:40.829849 containerd[1445]: time="2026-03-14T00:44:40.829740136Z" level=info msg="StartContainer for \"a9990128b218b3ac3d1a8025e38baa1c28cda3a862d4b378f83f171f7f262664\" returns successfully" Mar 14 00:44:41.118091 kubelet[2504]: I0314 00:44:41.117994 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:44:41.119188 kubelet[2504]: E0314 00:44:41.118341 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:41.148982 kubelet[2504]: I0314 00:44:41.148841 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-2slxj" podStartSLOduration=16.875765377 podStartE2EDuration="22.148825214s" podCreationTimestamp="2026-03-14 00:44:19 +0000 UTC" firstStartedPulling="2026-03-14 00:44:34.658579 +0000 UTC m=+34.060302399" lastFinishedPulling="2026-03-14 00:44:39.931638837 +0000 UTC m=+39.333362236" observedRunningTime="2026-03-14 00:44:40.116925806 +0000 UTC m=+39.518649245" watchObservedRunningTime="2026-03-14 00:44:41.148825214 +0000 UTC m=+40.550548623" Mar 14 00:44:41.518626 kernel: calico-node[5203]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:44:42.090078 kubelet[2504]: E0314 00:44:42.089996 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:44:42.114896 systemd[1]: Started sshd@7-10.0.0.158:22-10.0.0.1:33740.service - OpenSSH per-connection server daemon (10.0.0.1:33740). Mar 14 00:44:42.225586 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 33740 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:44:42.227224 sshd[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:44:42.233915 systemd-logind[1436]: New session 8 of user core. Mar 14 00:44:42.244946 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:44:42.363434 systemd-networkd[1388]: vxlan.calico: Link UP Mar 14 00:44:42.363444 systemd-networkd[1388]: vxlan.calico: Gained carrier Mar 14 00:44:42.865461 sshd[5266]: pam_unix(sshd:session): session closed for user core Mar 14 00:44:42.870763 systemd[1]: sshd@7-10.0.0.158:22-10.0.0.1:33740.service: Deactivated successfully. Mar 14 00:44:42.872808 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:44:42.873702 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:44:42.875438 systemd-logind[1436]: Removed session 8. 
Mar 14 00:44:43.199654 containerd[1445]: time="2026-03-14T00:44:43.199382553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:43.200642 containerd[1445]: time="2026-03-14T00:44:43.200467879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:44:43.201848 containerd[1445]: time="2026-03-14T00:44:43.201812006Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:43.205666 containerd[1445]: time="2026-03-14T00:44:43.205624260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.520003056s" Mar 14 00:44:43.205715 containerd[1445]: time="2026-03-14T00:44:43.205671678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:44:43.208865 containerd[1445]: time="2026-03-14T00:44:43.208815811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:44:43.212244 containerd[1445]: time="2026-03-14T00:44:43.212149489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:43.215307 containerd[1445]: time="2026-03-14T00:44:43.215257105Z" level=info msg="CreateContainer within sandbox \"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:44:43.246541 containerd[1445]: time="2026-03-14T00:44:43.240472366Z" level=info msg="CreateContainer within sandbox \"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2307a3e15615ac78ac87bcd9be96e0058430034cf8912f7d7280f7cf19c71750\"" Mar 14 00:44:43.247178 containerd[1445]: time="2026-03-14T00:44:43.247154514Z" level=info msg="StartContainer for \"2307a3e15615ac78ac87bcd9be96e0058430034cf8912f7d7280f7cf19c71750\"" Mar 14 00:44:43.302852 systemd[1]: Started cri-containerd-2307a3e15615ac78ac87bcd9be96e0058430034cf8912f7d7280f7cf19c71750.scope - libcontainer container 2307a3e15615ac78ac87bcd9be96e0058430034cf8912f7d7280f7cf19c71750. 
Mar 14 00:44:43.339191 containerd[1445]: time="2026-03-14T00:44:43.339102405Z" level=info msg="StartContainer for \"2307a3e15615ac78ac87bcd9be96e0058430034cf8912f7d7280f7cf19c71750\" returns successfully" Mar 14 00:44:43.911247 kubelet[2504]: I0314 00:44:43.911184 2504 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:44:43.914922 kubelet[2504]: I0314 00:44:43.914832 2504 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:44:43.977897 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Mar 14 00:44:44.986668 kubelet[2504]: I0314 00:44:44.986401 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-96thm" podStartSLOduration=17.190147763 podStartE2EDuration="25.986375121s" podCreationTimestamp="2026-03-14 00:44:19 +0000 UTC" firstStartedPulling="2026-03-14 00:44:34.410452195 +0000 UTC m=+33.812175595" lastFinishedPulling="2026-03-14 00:44:43.206679554 +0000 UTC m=+42.608402953" observedRunningTime="2026-03-14 00:44:44.973474095 +0000 UTC m=+44.375197504" watchObservedRunningTime="2026-03-14 00:44:44.986375121 +0000 UTC m=+44.388098520" Mar 14 00:44:45.505166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853256978.mount: Deactivated successfully. Mar 14 00:44:45.627037 containerd[1445]: time="2026-03-14T00:44:45.626654681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:45.629105 containerd[1445]: time="2026-03-14T00:44:45.628591303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:44:45.676146 containerd[1445]: time="2026-03-14T00:44:45.675884988Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:45.686747 containerd[1445]: time="2026-03-14T00:44:45.686381447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:44:45.694035 containerd[1445]: time="2026-03-14T00:44:45.693933960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.485057305s" Mar 14 00:44:45.694035 containerd[1445]: time="2026-03-14T00:44:45.693991770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:44:45.745961 containerd[1445]: time="2026-03-14T00:44:45.745696815Z" level=info msg="CreateContainer within sandbox \"31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:44:45.820585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614913782.mount: Deactivated 
successfully. Mar 14 00:44:45.847797 containerd[1445]: time="2026-03-14T00:44:45.845798978Z" level=info msg="CreateContainer within sandbox \"31f9247e8a95897dc49b811fb722a546c7a65fcc6059ef4ba2f234d32102ead1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f80406b13924f1fed622ca727406d4f01b6d46f48b6ea701599f0c079df39db4\"" Mar 14 00:44:45.873328 containerd[1445]: time="2026-03-14T00:44:45.871097682Z" level=info msg="StartContainer for \"f80406b13924f1fed622ca727406d4f01b6d46f48b6ea701599f0c079df39db4\"" Mar 14 00:44:46.029832 systemd[1]: Started cri-containerd-f80406b13924f1fed622ca727406d4f01b6d46f48b6ea701599f0c079df39db4.scope - libcontainer container f80406b13924f1fed622ca727406d4f01b6d46f48b6ea701599f0c079df39db4. Mar 14 00:44:46.299722 containerd[1445]: time="2026-03-14T00:44:46.299611291Z" level=info msg="StartContainer for \"f80406b13924f1fed622ca727406d4f01b6d46f48b6ea701599f0c079df39db4\" returns successfully" Mar 14 00:44:46.982395 kubelet[2504]: I0314 00:44:46.982219 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dd5c87855-chcll" podStartSLOduration=2.158041608 podStartE2EDuration="12.98220569s" podCreationTimestamp="2026-03-14 00:44:34 +0000 UTC" firstStartedPulling="2026-03-14 00:44:34.874736496 +0000 UTC m=+34.276459894" lastFinishedPulling="2026-03-14 00:44:45.698900577 +0000 UTC m=+45.100623976" observedRunningTime="2026-03-14 00:44:46.979862497 +0000 UTC m=+46.381585897" watchObservedRunningTime="2026-03-14 00:44:46.98220569 +0000 UTC m=+46.383929090" Mar 14 00:44:47.929994 systemd[1]: Started sshd@8-10.0.0.158:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752). Mar 14 00:44:48.144910 kubelet[2504]: I0314 00:44:48.144165 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:44:48.146453 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:44:48.230062 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:44:48.272741 systemd-logind[1436]: New session 9 of user core. Mar 14 00:44:48.281796 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:44:49.217894 sshd[5500]: pam_unix(sshd:session): session closed for user core Mar 14 00:44:49.231139 systemd[1]: sshd@8-10.0.0.158:22-10.0.0.1:33752.service: Deactivated successfully. Mar 14 00:44:49.282901 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:44:49.299478 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:44:49.310867 systemd-logind[1436]: Removed session 9. Mar 14 00:44:54.235598 systemd[1]: Started sshd@9-10.0.0.158:22-10.0.0.1:36174.service - OpenSSH per-connection server daemon (10.0.0.1:36174). Mar 14 00:44:54.272724 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 36174 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:44:54.274412 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:44:54.280542 systemd-logind[1436]: New session 10 of user core. Mar 14 00:44:54.289709 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:44:54.421779 sshd[5549]: pam_unix(sshd:session): session closed for user core Mar 14 00:44:54.426775 systemd[1]: sshd@9-10.0.0.158:22-10.0.0.1:36174.service: Deactivated successfully. Mar 14 00:44:54.429104 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 14 00:44:54.430055 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:44:54.431370 systemd-logind[1436]: Removed session 10. Mar 14 00:44:59.436799 systemd[1]: Started sshd@10-10.0.0.158:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180). Mar 14 00:44:59.473564 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:44:59.475791 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:44:59.483331 systemd-logind[1436]: New session 11 of user core. Mar 14 00:44:59.495711 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:44:59.650086 sshd[5580]: pam_unix(sshd:session): session closed for user core Mar 14 00:44:59.659037 systemd[1]: sshd@10-10.0.0.158:22-10.0.0.1:36180.service: Deactivated successfully. Mar 14 00:44:59.661573 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:44:59.663392 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:44:59.671800 systemd[1]: Started sshd@11-10.0.0.158:22-10.0.0.1:36186.service - OpenSSH per-connection server daemon (10.0.0.1:36186). Mar 14 00:44:59.672804 systemd-logind[1436]: Removed session 11. Mar 14 00:44:59.701461 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 36186 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:44:59.703591 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:44:59.708712 systemd-logind[1436]: New session 12 of user core. Mar 14 00:44:59.716648 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:44:59.886055 sshd[5595]: pam_unix(sshd:session): session closed for user core Mar 14 00:44:59.900845 systemd[1]: sshd@11-10.0.0.158:22-10.0.0.1:36186.service: Deactivated successfully. Mar 14 00:44:59.904105 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:44:59.909128 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:44:59.918006 systemd[1]: Started sshd@12-10.0.0.158:22-10.0.0.1:36188.service - OpenSSH per-connection server daemon (10.0.0.1:36188). Mar 14 00:44:59.919809 systemd-logind[1436]: Removed session 12. Mar 14 00:44:59.946164 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 36188 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:44:59.947967 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:44:59.954737 systemd-logind[1436]: New session 13 of user core. Mar 14 00:44:59.968747 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:45:00.088150 sshd[5609]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:00.092421 systemd[1]: sshd@12-10.0.0.158:22-10.0.0.1:36188.service: Deactivated successfully. Mar 14 00:45:00.094377 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:45:00.095377 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:45:00.096807 systemd-logind[1436]: Removed session 13. Mar 14 00:45:00.715038 containerd[1445]: time="2026-03-14T00:45:00.714950906Z" level=info msg="StopPodSandbox for \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\"" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.793 [WARNING][5633] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"a0072c78-1437-4491-bedf-c69885c50e4d", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6", Pod:"calico-apiserver-7746bfdf9f-ds2qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20206297317", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.794 [INFO][5633] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.794 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" iface="eth0" netns="" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.794 [INFO][5633] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.794 [INFO][5633] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.865 [INFO][5642] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.865 [INFO][5642] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.865 [INFO][5642] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.872 [WARNING][5642] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.872 [INFO][5642] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.874 [INFO][5642] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:00.883141 containerd[1445]: 2026-03-14 00:45:00.878 [INFO][5633] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:00.894000 containerd[1445]: time="2026-03-14T00:45:00.893769061Z" level=info msg="TearDown network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\" successfully" Mar 14 00:45:00.894000 containerd[1445]: time="2026-03-14T00:45:00.893829945Z" level=info msg="StopPodSandbox for \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\" returns successfully" Mar 14 00:45:00.927435 containerd[1445]: time="2026-03-14T00:45:00.927338025Z" level=info msg="RemovePodSandbox for \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\"" Mar 14 00:45:00.930367 containerd[1445]: time="2026-03-14T00:45:00.930297954Z" level=info msg="Forcibly stopping sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\"" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:00.975 [WARNING][5659] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"a0072c78-1437-4491-bedf-c69885c50e4d", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b185038e83b6c87fbee35ee15fb084fe6d1700e5b3855ba6ad755518867e29b6", Pod:"calico-apiserver-7746bfdf9f-ds2qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20206297317", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:00.976 [INFO][5659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:00.976 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" iface="eth0" netns="" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:00.976 [INFO][5659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:00.976 [INFO][5659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.004 [INFO][5667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.004 [INFO][5667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.004 [INFO][5667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.013 [WARNING][5667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.013 [INFO][5667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" HandleID="k8s-pod-network.f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--ds2qp-eth0" Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.015 [INFO][5667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.022303 containerd[1445]: 2026-03-14 00:45:01.018 [INFO][5659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c" Mar 14 00:45:01.022303 containerd[1445]: time="2026-03-14T00:45:01.022159685Z" level=info msg="TearDown network for sandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\" successfully" Mar 14 00:45:01.068434 containerd[1445]: time="2026-03-14T00:45:01.068331784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:01.068686 containerd[1445]: time="2026-03-14T00:45:01.068445679Z" level=info msg="RemovePodSandbox \"f7a012ce1b5b2e952447ec11af0374c273c88878f49313249a8098246dfced2c\" returns successfully" Mar 14 00:45:01.076707 containerd[1445]: time="2026-03-14T00:45:01.076644825Z" level=info msg="StopPodSandbox for \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\"" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.120 [WARNING][5684] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--s677d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"10165863-412c-4604-94c2-a3af60a284e9", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4", Pod:"coredns-66bc5c9577-s677d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4894f1e9d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.121 [INFO][5684] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.121 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" iface="eth0" netns="" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.121 [INFO][5684] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.121 [INFO][5684] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.159 [INFO][5692] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.159 [INFO][5692] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.159 [INFO][5692] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.165 [WARNING][5692] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.165 [INFO][5692] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.167 [INFO][5692] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.173867 containerd[1445]: 2026-03-14 00:45:01.170 [INFO][5684] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.173867 containerd[1445]: time="2026-03-14T00:45:01.173653528Z" level=info msg="TearDown network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\" successfully" Mar 14 00:45:01.173867 containerd[1445]: time="2026-03-14T00:45:01.173687652Z" level=info msg="StopPodSandbox for \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\" returns successfully" Mar 14 00:45:01.174785 containerd[1445]: time="2026-03-14T00:45:01.174304778Z" level=info msg="RemovePodSandbox for \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\"" Mar 14 00:45:01.174785 containerd[1445]: time="2026-03-14T00:45:01.174328934Z" level=info msg="Forcibly stopping sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\"" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.213 [WARNING][5710] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--s677d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"10165863-412c-4604-94c2-a3af60a284e9", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd373cf3f46c29a387211e677c63777a1955884d016fdf905ee4f6f765f4e3c4", Pod:"coredns-66bc5c9577-s677d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4894f1e9d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.213 [INFO][5710] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.213 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" iface="eth0" netns="" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.213 [INFO][5710] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.213 [INFO][5710] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.243 [INFO][5718] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.244 [INFO][5718] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.244 [INFO][5718] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.250 [WARNING][5718] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.250 [INFO][5718] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" HandleID="k8s-pod-network.86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Workload="localhost-k8s-coredns--66bc5c9577--s677d-eth0" Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.252 [INFO][5718] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.257908 containerd[1445]: 2026-03-14 00:45:01.255 [INFO][5710] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e" Mar 14 00:45:01.257908 containerd[1445]: time="2026-03-14T00:45:01.257817461Z" level=info msg="TearDown network for sandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\" successfully" Mar 14 00:45:01.269069 containerd[1445]: time="2026-03-14T00:45:01.268959366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:45:01.269069 containerd[1445]: time="2026-03-14T00:45:01.269027926Z" level=info msg="RemovePodSandbox \"86f88070c51a8e1d90961149c26658d5d812282ee42388ee9023af0aee77747e\" returns successfully" Mar 14 00:45:01.282424 containerd[1445]: time="2026-03-14T00:45:01.282206847Z" level=info msg="StopPodSandbox for \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\"" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.324 [WARNING][5735] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" WorkloadEndpoint="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.324 [INFO][5735] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.324 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" iface="eth0" netns="" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.324 [INFO][5735] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.324 [INFO][5735] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.357 [INFO][5744] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.357 [INFO][5744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.357 [INFO][5744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.364 [WARNING][5744] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.364 [INFO][5744] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.366 [INFO][5744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.372331 containerd[1445]: 2026-03-14 00:45:01.369 [INFO][5735] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.373051 containerd[1445]: time="2026-03-14T00:45:01.372970093Z" level=info msg="TearDown network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\" successfully" Mar 14 00:45:01.373051 containerd[1445]: time="2026-03-14T00:45:01.373018865Z" level=info msg="StopPodSandbox for \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\" returns successfully" Mar 14 00:45:01.373789 containerd[1445]: time="2026-03-14T00:45:01.373731856Z" level=info msg="RemovePodSandbox for \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\"" Mar 14 00:45:01.373789 containerd[1445]: time="2026-03-14T00:45:01.373778414Z" level=info msg="Forcibly stopping sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\"" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.415 [WARNING][5762] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" WorkloadEndpoint="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.416 [INFO][5762] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.416 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" iface="eth0" netns="" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.416 [INFO][5762] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.416 [INFO][5762] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.443 [INFO][5770] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.443 [INFO][5770] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.443 [INFO][5770] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.449 [WARNING][5770] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.450 [INFO][5770] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" HandleID="k8s-pod-network.97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Workload="localhost-k8s-whisker--6ffc58b654--qvwbh-eth0" Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.451 [INFO][5770] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.459798 containerd[1445]: 2026-03-14 00:45:01.455 [INFO][5762] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395" Mar 14 00:45:01.459798 containerd[1445]: time="2026-03-14T00:45:01.457796184Z" level=info msg="TearDown network for sandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\" successfully" Mar 14 00:45:01.485446 containerd[1445]: time="2026-03-14T00:45:01.485357517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:01.485446 containerd[1445]: time="2026-03-14T00:45:01.485424903Z" level=info msg="RemovePodSandbox \"97c6ccbf90ec9b674e1f2e15439197fea8499f9f0f4b07177a0c52495e93c395\" returns successfully" Mar 14 00:45:01.485880 containerd[1445]: time="2026-03-14T00:45:01.485845788Z" level=info msg="StopPodSandbox for \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\"" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.531 [WARNING][5787] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06", Pod:"goldmane-cccfbd5cf-2slxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70c41b549c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.531 [INFO][5787] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.531 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" iface="eth0" netns="" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.531 [INFO][5787] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.531 [INFO][5787] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.566 [INFO][5795] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.566 [INFO][5795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.566 [INFO][5795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.574 [WARNING][5795] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.575 [INFO][5795] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.577 [INFO][5795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.582106 containerd[1445]: 2026-03-14 00:45:01.579 [INFO][5787] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.582106 containerd[1445]: time="2026-03-14T00:45:01.582080068Z" level=info msg="TearDown network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\" successfully" Mar 14 00:45:01.582106 containerd[1445]: time="2026-03-14T00:45:01.582102421Z" level=info msg="StopPodSandbox for \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\" returns successfully" Mar 14 00:45:01.583618 containerd[1445]: time="2026-03-14T00:45:01.582777861Z" level=info msg="RemovePodSandbox for \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\"" Mar 14 00:45:01.583618 containerd[1445]: time="2026-03-14T00:45:01.582806556Z" level=info msg="Forcibly stopping sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\"" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.631 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"749bfef9-76e0-4e7a-aa4c-68e01c2e1c20", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cecc1d6f3cff4f444af3f9d4ef5c39cb2d90d86a977b2c60f53ef1fcb3b2e06", Pod:"goldmane-cccfbd5cf-2slxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70c41b549c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.631 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.631 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" iface="eth0" netns="" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.631 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.631 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.660 [INFO][5821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.660 [INFO][5821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.660 [INFO][5821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.666 [WARNING][5821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.666 [INFO][5821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" HandleID="k8s-pod-network.25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Workload="localhost-k8s-goldmane--cccfbd5cf--2slxj-eth0" Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.668 [INFO][5821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.674061 containerd[1445]: 2026-03-14 00:45:01.671 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651" Mar 14 00:45:01.674614 containerd[1445]: time="2026-03-14T00:45:01.674100796Z" level=info msg="TearDown network for sandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\" successfully" Mar 14 00:45:01.678649 containerd[1445]: time="2026-03-14T00:45:01.678610193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:01.678776 containerd[1445]: time="2026-03-14T00:45:01.678671989Z" level=info msg="RemovePodSandbox \"25417b1fee5e20bdb1169d92262f2966e9658db2139d3c411262618ca6c28651\" returns successfully" Mar 14 00:45:01.679211 containerd[1445]: time="2026-03-14T00:45:01.679190735Z" level=info msg="StopPodSandbox for \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\"" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.717 [WARNING][5841] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0", GenerateName:"calico-kube-controllers-85b988fbff-", Namespace:"calico-system", SelfLink:"", UID:"3831536c-b543-4e9f-9a7f-69e237b512a5", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b988fbff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a", Pod:"calico-kube-controllers-85b988fbff-2sprk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95152bf783a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.717 [INFO][5841] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.717 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" iface="eth0" netns="" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.717 [INFO][5841] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.717 [INFO][5841] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.744 [INFO][5849] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.744 [INFO][5849] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.744 [INFO][5849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.752 [WARNING][5849] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.752 [INFO][5849] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.754 [INFO][5849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.759181 containerd[1445]: 2026-03-14 00:45:01.756 [INFO][5841] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.759916 containerd[1445]: time="2026-03-14T00:45:01.759214585Z" level=info msg="TearDown network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\" successfully" Mar 14 00:45:01.759916 containerd[1445]: time="2026-03-14T00:45:01.759239532Z" level=info msg="StopPodSandbox for \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\" returns successfully" Mar 14 00:45:01.760045 containerd[1445]: time="2026-03-14T00:45:01.759920837Z" level=info msg="RemovePodSandbox for \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\"" Mar 14 00:45:01.760045 containerd[1445]: time="2026-03-14T00:45:01.759995638Z" level=info msg="Forcibly stopping sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\"" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.804 [WARNING][5866] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0", GenerateName:"calico-kube-controllers-85b988fbff-", Namespace:"calico-system", SelfLink:"", UID:"3831536c-b543-4e9f-9a7f-69e237b512a5", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b988fbff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d911866a9a17606a3c9078b02a52286059b60e25d1a39c2023aeab24cbec686a", Pod:"calico-kube-controllers-85b988fbff-2sprk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95152bf783a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.805 [INFO][5866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.805 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" iface="eth0" netns="" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.805 [INFO][5866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.805 [INFO][5866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.834 [INFO][5874] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.834 [INFO][5874] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.834 [INFO][5874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.840 [WARNING][5874] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.840 [INFO][5874] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" HandleID="k8s-pod-network.f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Workload="localhost-k8s-calico--kube--controllers--85b988fbff--2sprk-eth0" Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.842 [INFO][5874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.847664 containerd[1445]: 2026-03-14 00:45:01.844 [INFO][5866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa" Mar 14 00:45:01.847664 containerd[1445]: time="2026-03-14T00:45:01.847587695Z" level=info msg="TearDown network for sandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\" successfully" Mar 14 00:45:01.852210 containerd[1445]: time="2026-03-14T00:45:01.852155921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:01.852258 containerd[1445]: time="2026-03-14T00:45:01.852221084Z" level=info msg="RemovePodSandbox \"f77a9ee920b9032587137e12342ca828049e2679256d4e1a89773d5878cb6aaa\" returns successfully" Mar 14 00:45:01.852996 containerd[1445]: time="2026-03-14T00:45:01.852925441Z" level=info msg="StopPodSandbox for \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\"" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.894 [WARNING][5890] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wsjrn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c9fbbc2-3c55-454a-b11a-04b92d39d42f", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa", Pod:"coredns-66bc5c9577-wsjrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6af40ce07a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.894 [INFO][5890] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.895 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" iface="eth0" netns="" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.895 [INFO][5890] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.895 [INFO][5890] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.921 [INFO][5898] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.921 [INFO][5898] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.921 [INFO][5898] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.929 [WARNING][5898] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.929 [INFO][5898] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.931 [INFO][5898] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:01.937219 containerd[1445]: 2026-03-14 00:45:01.934 [INFO][5890] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:01.937219 containerd[1445]: time="2026-03-14T00:45:01.937176195Z" level=info msg="TearDown network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\" successfully" Mar 14 00:45:01.937219 containerd[1445]: time="2026-03-14T00:45:01.937199890Z" level=info msg="StopPodSandbox for \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\" returns successfully" Mar 14 00:45:01.938040 containerd[1445]: time="2026-03-14T00:45:01.937984817Z" level=info msg="RemovePodSandbox for \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\"" Mar 14 00:45:01.938040 containerd[1445]: time="2026-03-14T00:45:01.938009583Z" level=info msg="Forcibly stopping sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\"" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:01.982 [WARNING][5917] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wsjrn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c9fbbc2-3c55-454a-b11a-04b92d39d42f", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfd64960b6fafbd581119259b9bff0191098345c4828a7faeed5b0922cd483aa", Pod:"coredns-66bc5c9577-wsjrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6af40ce07a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:01.983 [INFO][5917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:01.983 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" iface="eth0" netns="" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:01.983 [INFO][5917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:01.983 [INFO][5917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.010 [INFO][5925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.010 [INFO][5925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.010 [INFO][5925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.018 [WARNING][5925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.018 [INFO][5925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" HandleID="k8s-pod-network.c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Workload="localhost-k8s-coredns--66bc5c9577--wsjrn-eth0" Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.020 [INFO][5925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:02.026166 containerd[1445]: 2026-03-14 00:45:02.022 [INFO][5917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d" Mar 14 00:45:02.026631 containerd[1445]: time="2026-03-14T00:45:02.026162531Z" level=info msg="TearDown network for sandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\" successfully" Mar 14 00:45:02.031134 containerd[1445]: time="2026-03-14T00:45:02.031059168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:02.031200 containerd[1445]: time="2026-03-14T00:45:02.031171598Z" level=info msg="RemovePodSandbox \"c910b234f9bf76f5c5b2c1ece8ee43752e76271db9abc1c34e983e54f6a8164d\" returns successfully" Mar 14 00:45:02.034196 containerd[1445]: time="2026-03-14T00:45:02.032139659Z" level=info msg="StopPodSandbox for \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\"" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.074 [WARNING][5942] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"2b88dcf5-44e7-404f-8042-caf40cd3a058", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f", Pod:"calico-apiserver-7746bfdf9f-tcpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali459334cfcfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.075 [INFO][5942] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.075 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" iface="eth0" netns="" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.075 [INFO][5942] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.075 [INFO][5942] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.103 [INFO][5950] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.104 [INFO][5950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.104 [INFO][5950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.111 [WARNING][5950] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.111 [INFO][5950] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.113 [INFO][5950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:02.120003 containerd[1445]: 2026-03-14 00:45:02.116 [INFO][5942] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.120655 containerd[1445]: time="2026-03-14T00:45:02.120038053Z" level=info msg="TearDown network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\" successfully" Mar 14 00:45:02.120655 containerd[1445]: time="2026-03-14T00:45:02.120070264Z" level=info msg="StopPodSandbox for \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\" returns successfully" Mar 14 00:45:02.120928 containerd[1445]: time="2026-03-14T00:45:02.120846755Z" level=info msg="RemovePodSandbox for \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\"" Mar 14 00:45:02.120928 containerd[1445]: time="2026-03-14T00:45:02.120915424Z" level=info msg="Forcibly stopping sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\"" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.167 [WARNING][5968] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0", GenerateName:"calico-apiserver-7746bfdf9f-", Namespace:"calico-system", SelfLink:"", UID:"2b88dcf5-44e7-404f-8042-caf40cd3a058", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7746bfdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e6183e7edd3def86befe426288389942d99b052e881c1ee08879f62591678f", Pod:"calico-apiserver-7746bfdf9f-tcpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali459334cfcfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.167 [INFO][5968] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.167 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" iface="eth0" netns="" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.167 [INFO][5968] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.167 [INFO][5968] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.202 [INFO][5976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.202 [INFO][5976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.202 [INFO][5976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.210 [WARNING][5976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.210 [INFO][5976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" HandleID="k8s-pod-network.6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Workload="localhost-k8s-calico--apiserver--7746bfdf9f--tcpxw-eth0" Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.213 [INFO][5976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:02.219434 containerd[1445]: 2026-03-14 00:45:02.215 [INFO][5968] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435" Mar 14 00:45:02.219434 containerd[1445]: time="2026-03-14T00:45:02.219398738Z" level=info msg="TearDown network for sandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\" successfully" Mar 14 00:45:02.230071 containerd[1445]: time="2026-03-14T00:45:02.229889393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:02.230071 containerd[1445]: time="2026-03-14T00:45:02.230017784Z" level=info msg="RemovePodSandbox \"6a9c8b99dd802981eedbdd1a25231bb6cef174a2df2e001f108c3876fb10d435\" returns successfully" Mar 14 00:45:02.230849 containerd[1445]: time="2026-03-14T00:45:02.230813014Z" level=info msg="StopPodSandbox for \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\"" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.281 [WARNING][5994] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96thm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33ff238e-5cbd-4d42-b80c-67e32b8fb49d", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d", Pod:"csi-node-driver-96thm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c03a6b746d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.281 [INFO][5994] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.281 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" iface="eth0" netns="" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.281 [INFO][5994] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.281 [INFO][5994] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.308 [INFO][6002] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.308 [INFO][6002] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.309 [INFO][6002] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.317 [WARNING][6002] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.317 [INFO][6002] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.319 [INFO][6002] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:02.324912 containerd[1445]: 2026-03-14 00:45:02.322 [INFO][5994] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.325326 containerd[1445]: time="2026-03-14T00:45:02.325004563Z" level=info msg="TearDown network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\" successfully" Mar 14 00:45:02.325326 containerd[1445]: time="2026-03-14T00:45:02.325040972Z" level=info msg="StopPodSandbox for \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\" returns successfully" Mar 14 00:45:02.325967 containerd[1445]: time="2026-03-14T00:45:02.325830898Z" level=info msg="RemovePodSandbox for \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\"" Mar 14 00:45:02.325967 containerd[1445]: time="2026-03-14T00:45:02.325889709Z" level=info msg="Forcibly stopping sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\"" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.375 [WARNING][6021] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96thm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33ff238e-5cbd-4d42-b80c-67e32b8fb49d", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfe21ae874abfa3889cc2119e21554fef5c4c14b99aec069f07451099176d32d", Pod:"csi-node-driver-96thm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c03a6b746d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.375 [INFO][6021] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.375 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" iface="eth0" netns="" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.375 [INFO][6021] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.375 [INFO][6021] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.406 [INFO][6030] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.406 [INFO][6030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.407 [INFO][6030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.413 [WARNING][6030] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.413 [INFO][6030] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" HandleID="k8s-pod-network.761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Workload="localhost-k8s-csi--node--driver--96thm-eth0" Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.415 [INFO][6030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:45:02.422765 containerd[1445]: 2026-03-14 00:45:02.418 [INFO][6021] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e" Mar 14 00:45:02.422765 containerd[1445]: time="2026-03-14T00:45:02.420710956Z" level=info msg="TearDown network for sandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\" successfully" Mar 14 00:45:02.425649 containerd[1445]: time="2026-03-14T00:45:02.425479879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:45:02.425649 containerd[1445]: time="2026-03-14T00:45:02.425593873Z" level=info msg="RemovePodSandbox \"761f0a1289f03bf17949d3e6226de259e38b8d8d7bbf200d68231f5989ee078e\" returns successfully" Mar 14 00:45:03.978391 systemd[1]: run-containerd-runc-k8s.io-d17eeee1bd68bad3877a5fb629fe8660a63675e490cfe029b24c7ff55a2beddb-runc.VTIkeq.mount: Deactivated successfully. Mar 14 00:45:05.100808 systemd[1]: Started sshd@13-10.0.0.158:22-10.0.0.1:40824.service - OpenSSH per-connection server daemon (10.0.0.1:40824). Mar 14 00:45:05.192306 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 40824 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:05.193299 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:05.200179 systemd-logind[1436]: New session 14 of user core. Mar 14 00:45:05.205663 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:45:05.342271 sshd[6078]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:05.348234 systemd[1]: sshd@13-10.0.0.158:22-10.0.0.1:40824.service: Deactivated successfully. Mar 14 00:45:05.349940 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:45:05.351403 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:45:05.357797 systemd[1]: Started sshd@14-10.0.0.158:22-10.0.0.1:40830.service - OpenSSH per-connection server daemon (10.0.0.1:40830). Mar 14 00:45:05.358793 systemd-logind[1436]: Removed session 14. Mar 14 00:45:05.385813 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:05.387445 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:05.392173 systemd-logind[1436]: New session 15 of user core. Mar 14 00:45:05.399654 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 14 00:45:05.653725 sshd[6113]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:05.668230 systemd[1]: sshd@14-10.0.0.158:22-10.0.0.1:40830.service: Deactivated successfully. Mar 14 00:45:05.670137 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:45:05.672207 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:45:05.684895 systemd[1]: Started sshd@15-10.0.0.158:22-10.0.0.1:40840.service - OpenSSH per-connection server daemon (10.0.0.1:40840). Mar 14 00:45:05.686815 systemd-logind[1436]: Removed session 15. Mar 14 00:45:05.711447 sshd[6125]: Accepted publickey for core from 10.0.0.1 port 40840 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:05.712986 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:05.717775 systemd-logind[1436]: New session 16 of user core. Mar 14 00:45:05.727692 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:45:06.237654 sshd[6125]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:06.247331 systemd[1]: sshd@15-10.0.0.158:22-10.0.0.1:40840.service: Deactivated successfully. Mar 14 00:45:06.249739 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:45:06.251837 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:45:06.259555 systemd[1]: Started sshd@16-10.0.0.158:22-10.0.0.1:40842.service - OpenSSH per-connection server daemon (10.0.0.1:40842). Mar 14 00:45:06.262367 systemd-logind[1436]: Removed session 16. Mar 14 00:45:06.309139 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 40842 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:06.310891 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:06.315794 systemd-logind[1436]: New session 17 of user core. Mar 14 00:45:06.325678 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:45:06.615812 sshd[6150]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:06.625252 systemd[1]: sshd@16-10.0.0.158:22-10.0.0.1:40842.service: Deactivated successfully. Mar 14 00:45:06.627972 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:45:06.631974 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:45:06.643828 systemd[1]: Started sshd@17-10.0.0.158:22-10.0.0.1:40844.service - OpenSSH per-connection server daemon (10.0.0.1:40844). Mar 14 00:45:06.645320 systemd-logind[1436]: Removed session 17. Mar 14 00:45:06.673595 sshd[6162]: Accepted publickey for core from 10.0.0.1 port 40844 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:06.675174 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:06.680969 systemd-logind[1436]: New session 18 of user core. Mar 14 00:45:06.686726 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 14 00:45:06.810481 sshd[6162]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:06.814597 systemd[1]: sshd@17-10.0.0.158:22-10.0.0.1:40844.service: Deactivated successfully. Mar 14 00:45:06.816564 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:45:06.817633 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:45:06.818832 systemd-logind[1436]: Removed session 18. 
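The sshd/systemd entries above repeat one lifecycle per connection: Accepted publickey → pam_unix session opened → systemd-logind "New session N of user core" → session-N.scope started → session closed → sshd@….service deactivated → "Removed session N". A minimal sketch, assuming the same "Mar 14 00:45:05.653725 …" timestamp prefix as this journal, that pairs the open/close lines and reports each session's duration (the sample lines are copied from the log; everything else is hypothetical):

// Editorial sketch only: pair the "New session N" / "Removed session N"
// systemd-logind lines from a journal excerpt like the one above and print
// how long each SSH session lasted.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

const stampLayout = "Jan 2 15:04:05.000000" // the year is not part of the prefix

var (
	newSession     = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) .*New session (\d+) of user`)
	removedSession = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	// Sample lines taken verbatim from the journal above.
	journal := `Mar 14 00:45:05.200179 systemd-logind[1436]: New session 14 of user core.
Mar 14 00:45:05.358793 systemd-logind[1436]: Removed session 14.
Mar 14 00:45:05.392173 systemd-logind[1436]: New session 15 of user core.
Mar 14 00:45:05.686815 systemd-logind[1436]: Removed session 15.`

	opened := map[string]time.Time{} // session number -> open timestamp
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := newSession.FindStringSubmatch(line); m != nil {
			t, _ := time.Parse(stampLayout, m[1])
			opened[m[2]] = t
		} else if m := removedSession.FindStringSubmatch(line); m != nil {
			t, _ := time.Parse(stampLayout, m[1])
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s lasted %v\n", m[2], t.Sub(start))
			}
		}
	}
}

For sessions 14 and 15 above this prints durations of roughly 159 ms and 295 ms respectively.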
Mar 14 00:45:10.738534 kubelet[2504]: I0314 00:45:10.738413 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:45:11.832723 systemd[1]: Started sshd@18-10.0.0.158:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884). Mar 14 00:45:11.908149 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:11.909942 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:11.915539 systemd-logind[1436]: New session 19 of user core. Mar 14 00:45:11.921669 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:45:12.053388 sshd[6206]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:12.058017 systemd[1]: sshd@18-10.0.0.158:22-10.0.0.1:36884.service: Deactivated successfully. Mar 14 00:45:12.060098 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:45:12.061135 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:45:12.062817 systemd-logind[1436]: Removed session 19. Mar 14 00:45:14.741397 kubelet[2504]: E0314 00:45:14.741356 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:45:17.069950 systemd[1]: Started sshd@19-10.0.0.158:22-10.0.0.1:36886.service - OpenSSH per-connection server daemon (10.0.0.1:36886). Mar 14 00:45:17.170667 sshd[6250]: Accepted publickey for core from 10.0.0.1 port 36886 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:17.172650 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:17.179453 systemd-logind[1436]: New session 20 of user core. Mar 14 00:45:17.184758 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:45:17.392064 sshd[6250]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:17.395970 systemd[1]: sshd@19-10.0.0.158:22-10.0.0.1:36886.service: Deactivated successfully. Mar 14 00:45:17.397953 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:45:17.398870 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:45:17.400191 systemd-logind[1436]: Removed session 20. Mar 14 00:45:22.406406 systemd[1]: Started sshd@20-10.0.0.158:22-10.0.0.1:41428.service - OpenSSH per-connection server daemon (10.0.0.1:41428). Mar 14 00:45:22.441120 sshd[6264]: Accepted publickey for core from 10.0.0.1 port 41428 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:45:22.442649 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:45:22.446850 systemd-logind[1436]: New session 21 of user core. Mar 14 00:45:22.460672 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:45:22.588700 sshd[6264]: pam_unix(sshd:session): session closed for user core Mar 14 00:45:22.592725 systemd[1]: sshd@20-10.0.0.158:22-10.0.0.1:41428.service: Deactivated successfully. Mar 14 00:45:22.595223 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:45:22.596124 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:45:22.597331 systemd-logind[1436]: Removed session 21.
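The kubelet warning at 00:45:14 ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") is emitted because the Linux resolver, and therefore the resolv.conf kubelet hands to pods, honors at most three nameserver entries; when the node lists more, kubelet keeps the first three and drops the rest. A minimal sketch of that truncation rule, assuming a resolv.conf-style input with a hypothetical fourth nameserver (the cap of three matches glibc's MAXNS; the code is an illustration, not kubelet's implementation):

// Editorial sketch only: reproduce the "keep at most three nameservers" rule
// behind the kubelet "Nameserver limits exceeded" warning above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; kubelet applies the same cap

func applyNameserverLimit(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != "nameserver" {
			continue
		}
		if len(kept) < maxNameservers {
			kept = append(kept, fields[1])
		} else {
			dropped = append(dropped, fields[1])
		}
	}
	return kept, dropped
}

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9`

	kept, dropped := applyNameserverLimit(resolvConf)
	if len(dropped) > 0 {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}

With the input above the sketch prints the same applied nameserver line as the kubelet entry.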