Mar 14 00:19:28.102577 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026 Mar 14 00:19:28.102611 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:19:28.102629 kernel: BIOS-provided physical RAM map: Mar 14 00:19:28.102638 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 14 00:19:28.102647 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 14 00:19:28.102655 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 14 00:19:28.102665 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 14 00:19:28.102675 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 14 00:19:28.102684 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 14 00:19:28.102747 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 14 00:19:28.102756 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 14 00:19:28.102765 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 14 00:19:28.102797 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 14 00:19:28.102808 kernel: NX (Execute Disable) protection: active Mar 14 00:19:28.102818 kernel: APIC: Static calls initialized Mar 14 00:19:28.102852 kernel: SMBIOS 2.8 present. 
Mar 14 00:19:28.102862 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 14 00:19:28.102908 kernel: Hypervisor detected: KVM Mar 14 00:19:28.102918 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 14 00:19:28.102930 kernel: kvm-clock: using sched offset of 18959033734 cycles Mar 14 00:19:28.102941 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 14 00:19:28.102952 kernel: tsc: Detected 2445.426 MHz processor Mar 14 00:19:28.102964 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 14 00:19:28.102974 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 14 00:19:28.102992 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 14 00:19:28.103003 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 14 00:19:28.103012 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 14 00:19:28.103022 kernel: Using GB pages for direct mapping Mar 14 00:19:28.103032 kernel: ACPI: Early table checksum verification disabled Mar 14 00:19:28.103041 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 14 00:19:28.103051 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103060 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103070 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103084 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 14 00:19:28.103095 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103106 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103117 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103128 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Mar 14 00:19:28.103139 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 14 00:19:28.103149 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 14 00:19:28.103166 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 14 00:19:28.103181 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 14 00:19:28.103192 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 14 00:19:28.103203 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 14 00:19:28.103214 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 14 00:19:28.103225 kernel: No NUMA configuration found Mar 14 00:19:28.103236 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 14 00:19:28.103252 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 14 00:19:28.103263 kernel: Zone ranges: Mar 14 00:19:28.103274 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:19:28.103284 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 14 00:19:28.103294 kernel: Normal empty Mar 14 00:19:28.103340 kernel: Movable zone start for each node Mar 14 00:19:28.103350 kernel: Early memory node ranges Mar 14 00:19:28.103360 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 14 00:19:28.103370 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 14 00:19:28.103381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 14 00:19:28.103396 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:19:28.103429 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 14 00:19:28.103440 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 14 00:19:28.103450 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 14 00:19:28.103460 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 14 00:19:28.103470 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 14 00:19:28.103480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 14 00:19:28.103489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 14 00:19:28.103499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:19:28.103514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 14 00:19:28.103524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 14 00:19:28.103536 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:19:28.103547 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 14 00:19:28.103558 kernel: TSC deadline timer available Mar 14 00:19:28.103570 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 14 00:19:28.103582 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 14 00:19:28.103595 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 14 00:19:28.103633 kernel: kvm-guest: setup PV sched yield Mar 14 00:19:28.103650 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 14 00:19:28.103660 kernel: Booting paravirtualized kernel on KVM Mar 14 00:19:28.103670 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:19:28.103680 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 14 00:19:28.103740 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 14 00:19:28.103751 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 14 00:19:28.103761 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 14 00:19:28.103770 kernel: kvm-guest: PV spinlocks enabled Mar 14 00:19:28.103782 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:19:28.103806 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:19:28.103816 kernel: random: crng init done Mar 14 00:19:28.103826 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 14 00:19:28.103836 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 14 00:19:28.103847 kernel: Fallback order for Node 0: 0 Mar 14 00:19:28.103857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Mar 14 00:19:28.103927 kernel: Policy zone: DMA32 Mar 14 00:19:28.103943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:19:28.103960 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved) Mar 14 00:19:28.103972 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 14 00:19:28.104032 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:19:28.104042 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:19:28.104052 kernel: Dynamic Preempt: voluntary Mar 14 00:19:28.104062 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:19:28.104110 kernel: rcu: RCU event tracing is enabled. Mar 14 00:19:28.104122 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 14 00:19:28.104133 kernel: Trampoline variant of Tasks RCU enabled. Mar 14 00:19:28.104150 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:19:28.104160 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:19:28.104176 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 14 00:19:28.104187 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 14 00:19:28.104222 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 14 00:19:28.104233 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 14 00:19:28.104243 kernel: Console: colour VGA+ 80x25 Mar 14 00:19:28.104253 kernel: printk: console [ttyS0] enabled Mar 14 00:19:28.104262 kernel: ACPI: Core revision 20230628 Mar 14 00:19:28.104279 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 14 00:19:28.104289 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:19:28.104298 kernel: x2apic enabled Mar 14 00:19:28.104309 kernel: APIC: Switched APIC routing to: physical x2apic Mar 14 00:19:28.104319 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 14 00:19:28.104329 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 14 00:19:28.104340 kernel: kvm-guest: setup PV IPIs Mar 14 00:19:28.104350 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 14 00:19:28.104381 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 14 00:19:28.104392 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Mar 14 00:19:28.104402 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 14 00:19:28.104413 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 14 00:19:28.104428 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 14 00:19:28.104438 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:19:28.104448 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:19:28.104459 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:19:28.104469 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:19:28.104484 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 14 00:19:28.104524 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 14 00:19:28.104536 kernel: active return thunk: srso_alias_return_thunk Mar 14 00:19:28.104978 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 14 00:19:28.104989 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 14 00:19:28.105000 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:19:28.105013 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:19:28.105025 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:19:28.105042 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:19:28.105053 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:19:28.105063 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Mar 14 00:19:28.105074 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:19:28.105084 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:19:28.105094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:19:28.105105 kernel: landlock: Up and running. Mar 14 00:19:28.105114 kernel: SELinux: Initializing. Mar 14 00:19:28.105125 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:19:28.105139 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:19:28.105149 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 14 00:19:28.105160 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:19:28.105171 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:19:28.105182 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:19:28.105193 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 14 00:19:28.105205 kernel: signal: max sigframe size: 1776 Mar 14 00:19:28.105247 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:19:28.105263 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:19:28.105280 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:19:28.105294 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:19:28.105306 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:19:28.105319 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 14 00:19:28.105361 kernel: smp: Brought up 1 node, 4 CPUs Mar 14 00:19:28.105375 kernel: smpboot: Max logical packages: 1 Mar 14 00:19:28.105387 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 14 00:19:28.105399 kernel: devtmpfs: initialized Mar 14 00:19:28.105410 kernel: x86/mm: Memory block size: 128MB Mar 14 00:19:28.105426 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:19:28.105437 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 14 00:19:28.105448 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:19:28.105459 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:19:28.105469 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:19:28.105480 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:19:28.105937 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:19:28.105954 kernel: audit: type=2000 audit(1773447561.747:1): state=initialized audit_enabled=0 res=1 Mar 14 00:19:28.105965 kernel: cpuidle: using governor menu Mar 14 00:19:28.105982 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:19:28.105993 kernel: dca service started, version 1.12.1 Mar 14 00:19:28.106004 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 14 00:19:28.106015 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 14 00:19:28.106025 kernel: PCI: Using configuration type 1 for base access Mar 14 00:19:28.106036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 14 00:19:28.106047 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:19:28.106058 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:19:28.106068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:19:28.106083 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:19:28.106094 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:19:28.106104 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:19:28.106115 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:19:28.106126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 14 00:19:28.106138 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:19:28.106151 kernel: ACPI: Interpreter enabled Mar 14 00:19:28.106161 kernel: ACPI: PM: (supports S0 S3 S5) Mar 14 00:19:28.106175 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:19:28.106193 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:19:28.106206 kernel: PCI: Using E820 reservations for host bridge windows Mar 14 00:19:28.106217 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 14 00:19:28.106228 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 14 00:19:28.106799 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 14 00:19:28.107205 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 14 00:19:28.107452 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 14 00:19:28.107478 kernel: PCI host bridge to bus 0000:00 Mar 14 00:19:28.107827 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 14 00:19:28.108076 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 14 00:19:28.108402 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 14 00:19:28.108633 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Mar 14 00:19:28.108925 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 14 00:19:28.109124 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 14 00:19:28.109360 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 14 00:19:28.109955 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 14 00:19:28.110247 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 14 00:19:28.110457 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 14 00:19:28.110666 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 14 00:19:28.110983 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 14 00:19:28.111185 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 14 00:19:28.111481 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 14 00:19:28.111781 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 14 00:19:28.112041 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 14 00:19:28.112241 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 14 00:19:28.112536 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 14 00:19:28.112812 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 14 00:19:28.114458 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 14 00:19:28.115115 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 14 00:19:28.115459 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 14 00:19:28.115670 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 14 00:19:28.116155 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 14 00:19:28.116360 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 14 00:19:28.116602 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Mar 14 00:19:28.116996 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 14 00:19:28.117234 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 14 00:19:28.117552 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 14 00:19:28.117815 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 14 00:19:28.119781 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 14 00:19:28.120324 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 14 00:19:28.120556 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 14 00:19:28.120606 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 14 00:19:28.120619 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 14 00:19:28.120631 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 14 00:19:28.120643 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 14 00:19:28.120654 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 14 00:19:28.120768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 14 00:19:28.120782 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 14 00:19:28.120793 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 14 00:19:28.120811 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 14 00:19:28.120822 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 14 00:19:28.120834 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 14 00:19:28.120846 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 14 00:19:28.120858 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 14 00:19:28.120909 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 14 00:19:28.120922 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 14 00:19:28.120934 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 14 00:19:28.120946 
kernel: iommu: Default domain type: Translated Mar 14 00:19:28.120965 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:19:28.120977 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:19:28.120989 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 14 00:19:28.120999 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 14 00:19:28.121009 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 14 00:19:28.121225 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 14 00:19:28.121448 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 14 00:19:28.121662 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 14 00:19:28.121683 kernel: vgaarb: loaded Mar 14 00:19:28.121756 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 14 00:19:28.121768 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 14 00:19:28.121779 kernel: clocksource: Switched to clocksource kvm-clock Mar 14 00:19:28.121790 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:19:28.121802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:19:28.121813 kernel: pnp: PnP ACPI init Mar 14 00:19:28.122182 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 14 00:19:28.125249 kernel: pnp: PnP ACPI: found 6 devices Mar 14 00:19:28.125329 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:19:28.125483 kernel: NET: Registered PF_INET protocol family Mar 14 00:19:28.125496 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 14 00:19:28.125508 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 14 00:19:28.125519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 14 00:19:28.125532 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 14 00:19:28.125543 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:19:28.125557 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:19:28.125568 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:19:28.125584 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:19:28.125596 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:19:28.125608 kernel: NET: Registered PF_XDP protocol family Mar 14 00:19:28.126586 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 14 00:19:28.127073 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:19:28.127403 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:19:28.127579 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 14 00:19:28.128044 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 14 00:19:28.128238 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 14 00:19:28.128268 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:19:28.128278 kernel: Initialise system trusted keyrings Mar 14 00:19:28.128289 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:19:28.128300 kernel: Key type asymmetric registered Mar 14 00:19:28.128311 kernel: Asymmetric key parser 'x509' registered Mar 14 00:19:28.128321 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:19:28.128332 kernel: io scheduler mq-deadline registered Mar 14 00:19:28.128345 kernel: io scheduler kyber registered Mar 14 00:19:28.128355 kernel: io scheduler bfq registered Mar 14 00:19:28.128370 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:19:28.128382 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:19:28.128394 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:19:28.128405 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 14 00:19:28.128416 kernel: Serial: 
8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:19:28.128427 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:19:28.128438 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:19:28.128449 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:19:28.128460 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:19:28.128478 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:19:28.129526 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 14 00:19:28.130089 kernel: rtc_cmos 00:04: registered as rtc0 Mar 14 00:19:28.130408 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:19:26 UTC (1773447566) Mar 14 00:19:28.130587 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:19:28.130603 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:19:28.130615 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:19:28.130627 kernel: Segment Routing with IPv6 Mar 14 00:19:28.130645 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:19:28.130655 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:19:28.130666 kernel: Key type dns_resolver registered Mar 14 00:19:28.130676 kernel: IPI shorthand broadcast: enabled Mar 14 00:19:28.130817 kernel: sched_clock: Marking stable (4218031462, 929059926)->(6503205550, -1356114162) Mar 14 00:19:28.130832 kernel: registered taskstats version 1 Mar 14 00:19:28.130844 kernel: Loading compiled-in X.509 certificates Mar 14 00:19:28.130854 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:19:28.130865 kernel: Key type .fscrypt registered Mar 14 00:19:28.130944 kernel: Key type fscrypt-provisioning registered Mar 14 00:19:28.130956 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:19:28.130970 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:19:28.130980 kernel: ima: No architecture policies found Mar 14 00:19:28.130990 kernel: clk: Disabling unused clocks Mar 14 00:19:28.130999 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:19:28.131009 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:19:28.131020 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:19:28.131030 kernel: Run /init as init process Mar 14 00:19:28.131080 kernel: with arguments: Mar 14 00:19:28.131090 kernel: /init Mar 14 00:19:28.131101 kernel: with environment: Mar 14 00:19:28.131110 kernel: HOME=/ Mar 14 00:19:28.131120 kernel: TERM=linux Mar 14 00:19:28.131133 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:19:28.131146 systemd[1]: Detected virtualization kvm. Mar 14 00:19:28.131161 systemd[1]: Detected architecture x86-64. Mar 14 00:19:28.131171 systemd[1]: Running in initrd. Mar 14 00:19:28.131182 systemd[1]: No hostname configured, using default hostname. Mar 14 00:19:28.131194 systemd[1]: Hostname set to . Mar 14 00:19:28.131207 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:19:28.131218 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:19:28.131229 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:19:28.131241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:19:28.131258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 14 00:19:28.131271 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:19:28.131282 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:19:28.131293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:19:28.131306 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:19:28.131317 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:19:28.131328 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:19:28.131345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:19:28.131357 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:19:28.131368 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:19:28.131379 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:19:28.131409 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:19:28.131425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:19:28.131436 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:19:28.131451 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:19:28.131463 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:19:28.131475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:19:28.131487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:19:28.131498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:19:28.131510 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 00:19:28.131521 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:19:28.131532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:19:28.131549 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:19:28.131562 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:19:28.131572 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:19:28.131583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:19:28.131594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:19:28.131640 systemd-journald[194]: Collecting audit messages is disabled.
Mar 14 00:19:28.131672 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:19:28.132502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:19:28.132523 systemd-journald[194]: Journal started
Mar 14 00:19:28.132553 systemd-journald[194]: Runtime Journal (/run/log/journal/a6cb03b4315348309d50ea73499b8f75) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:19:28.153044 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:19:28.155128 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:19:28.169939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:19:28.196190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:19:28.245786 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:19:28.269107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:19:28.309373 systemd-modules-load[195]: Inserted module 'overlay'
Mar 14 00:19:28.574558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:19:28.574666 kernel: Bridge firewalling registered
Mar 14 00:19:28.316052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:19:28.372663 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 14 00:19:28.608552 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:19:28.621162 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:19:28.664319 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:19:28.672914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:19:28.675799 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:19:28.743376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:19:28.771488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:19:28.779001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:19:28.819166 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:19:28.870056 dracut-cmdline[234]: dracut-dracut-053
Mar 14 00:19:28.878377 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:19:28.905039 systemd-resolved[230]: Positive Trust Anchors:
Mar 14 00:19:28.905060 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:19:28.905105 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:19:28.994869 systemd-resolved[230]: Defaulting to hostname 'linux'.
Mar 14 00:19:29.003854 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:19:29.043577 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:19:29.151027 kernel: SCSI subsystem initialized
Mar 14 00:19:29.167456 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:19:29.194433 kernel: iscsi: registered transport (tcp)
Mar 14 00:19:29.256611 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:19:29.257075 kernel: QLogic iSCSI HBA Driver
Mar 14 00:19:29.493765 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:19:29.525595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:19:29.642157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:19:29.642202 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:19:29.645943 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:19:29.747966 kernel: raid6: avx2x4 gen() 21660 MB/s
Mar 14 00:19:29.764838 kernel: raid6: avx2x2 gen() 22792 MB/s
Mar 14 00:19:29.787302 kernel: raid6: avx2x1 gen() 11518 MB/s
Mar 14 00:19:29.787656 kernel: raid6: using algorithm avx2x2 gen() 22792 MB/s
Mar 14 00:19:29.810369 kernel: raid6: .... xor() 19768 MB/s, rmw enabled
Mar 14 00:19:29.810456 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:19:29.880378 kernel: xor: automatically using best checksumming function   avx
Mar 14 00:19:30.522065 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:19:30.573581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:19:30.610454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:19:30.782346 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Mar 14 00:19:30.832414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:19:30.902473 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:19:30.956559 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Mar 14 00:19:31.082535 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:19:31.119021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:19:31.361586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:19:31.408030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:19:31.518596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:19:31.543251 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:19:31.570468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:19:31.599052 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:19:31.664479 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:19:31.685869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:19:31.689470 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:19:31.687394 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:19:31.718574 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:19:31.745377 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:19:31.747278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:19:31.779836 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:19:31.863084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:19:31.909350 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:19:31.986818 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 14 00:19:31.999179 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 14 00:19:32.012413 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:19:32.012499 kernel: GPT:9289727 != 19775487
Mar 14 00:19:32.012523 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:19:32.012542 kernel: GPT:9289727 != 19775487
Mar 14 00:19:32.012559 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:19:32.012579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:19:32.219868 kernel: libata version 3.00 loaded.
Mar 14 00:19:32.403428 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (482)
Mar 14 00:19:32.404173 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:19:32.436524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:19:32.459940 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470)
Mar 14 00:19:32.538316 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:19:32.565046 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:19:32.594436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 14 00:19:32.641230 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:19:32.641559 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:19:32.641944 kernel: scsi host0: ahci
Mar 14 00:19:32.642230 kernel: scsi host1: ahci
Mar 14 00:19:32.636774 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 14 00:19:32.721395 kernel: scsi host2: ahci
Mar 14 00:19:32.723377 kernel: scsi host3: ahci
Mar 14 00:19:32.723843 kernel: scsi host4: ahci
Mar 14 00:19:32.730749 kernel: scsi host5: ahci
Mar 14 00:19:32.732343 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 14 00:19:32.732382 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 14 00:19:32.732404 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 14 00:19:32.732420 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 14 00:19:32.732439 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 14 00:19:32.732454 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 14 00:19:32.732473 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:19:32.681242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:19:32.755493 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 14 00:19:32.755761 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 14 00:19:32.868152 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:19:32.905525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:19:32.983162 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:19:32.983245 disk-uuid[548]: Primary Header is updated.
Mar 14 00:19:32.983245 disk-uuid[548]: Secondary Entries is updated.
Mar 14 00:19:32.983245 disk-uuid[548]: Secondary Header is updated.
Mar 14 00:19:33.008138 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 14 00:19:33.008218 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:19:33.022081 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:19:33.022172 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 14 00:19:33.038763 kernel: ata3.00: applying bridge limits
Mar 14 00:19:33.038842 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:19:33.035326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:19:33.096434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:19:33.096470 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:19:33.123176 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:19:33.138884 kernel: ata3.00: configured for UDMA/100
Mar 14 00:19:33.156411 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Mar 14 00:19:33.315837 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 14 00:19:33.316553 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:19:33.341842 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:19:34.030763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:19:34.033261 disk-uuid[562]: The operation has completed successfully.
Mar 14 00:19:34.206580 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:19:34.206829 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:19:34.246256 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:19:34.299617 sh[599]: Success
Mar 14 00:19:34.392809 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:19:34.572181 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:19:34.654496 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:19:34.688377 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:19:34.779572 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:19:34.780406 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:19:34.792570 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:19:34.793154 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:19:34.796998 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:19:35.000338 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:19:35.038326 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:19:35.104163 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:19:35.152862 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:19:35.358194 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:19:35.358285 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:19:35.358309 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:19:35.374865 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:19:35.415837 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:19:35.446743 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:19:35.470979 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:19:35.496077 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:19:36.053229 ignition[695]: Ignition 2.19.0
Mar 14 00:19:36.053312 ignition[695]: Stage: fetch-offline
Mar 14 00:19:36.055569 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:19:36.053413 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:19:36.053431 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:19:36.201436 kernel: hrtimer: interrupt took 26657066 ns
Mar 14 00:19:36.053669 ignition[695]: parsed url from cmdline: ""
Mar 14 00:19:36.053679 ignition[695]: no config URL provided
Mar 14 00:19:36.053753 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:19:36.053775 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:19:36.053849 ignition[695]: op(1): [started]  loading QEMU firmware config module
Mar 14 00:19:36.053860 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 14 00:19:36.246205 ignition[695]: op(1): [finished] loading QEMU firmware config module
Mar 14 00:19:36.272231 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:19:36.351263 systemd-networkd[787]: lo: Link UP
Mar 14 00:19:36.351306 systemd-networkd[787]: lo: Gained carrier
Mar 14 00:19:36.362581 systemd-networkd[787]: Enumeration completed
Mar 14 00:19:36.365430 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:19:36.371642 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:19:36.371684 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:19:36.417523 systemd[1]: Reached target network.target - Network.
Mar 14 00:19:36.465809 systemd-networkd[787]: eth0: Link UP
Mar 14 00:19:36.465853 systemd-networkd[787]: eth0: Gained carrier
Mar 14 00:19:36.465875 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:19:36.537846 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:19:36.673764 ignition[695]: parsing config with SHA512: c57d4539d2adaa9af46dc96ca11122cce4c4715fbe93a737c05b2e5e552d278e4003b1e829e79df47f22d3fcbc6fe8791d8975d341f7bec4ad9bada899727778
Mar 14 00:19:36.685761 unknown[695]: fetched base config from "system"
Mar 14 00:19:36.685782 unknown[695]: fetched user config from "qemu"
Mar 14 00:19:36.698802 ignition[695]: fetch-offline: fetch-offline passed
Mar 14 00:19:36.699242 ignition[695]: Ignition finished successfully
Mar 14 00:19:36.706401 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:19:36.711038 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 14 00:19:36.731158 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:19:36.778334 ignition[791]: Ignition 2.19.0
Mar 14 00:19:36.778373 ignition[791]: Stage: kargs
Mar 14 00:19:36.778660 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:19:36.778679 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:19:36.780081 ignition[791]: kargs: kargs passed
Mar 14 00:19:36.780148 ignition[791]: Ignition finished successfully
Mar 14 00:19:36.808181 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:19:36.828294 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:19:36.862297 ignition[799]: Ignition 2.19.0
Mar 14 00:19:36.862351 ignition[799]: Stage: disks
Mar 14 00:19:36.866868 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:19:36.862600 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:19:36.873285 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:19:36.862620 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:19:36.880481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:19:36.864251 ignition[799]: disks: disks passed
Mar 14 00:19:36.888634 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:19:36.864328 ignition[799]: Ignition finished successfully
Mar 14 00:19:36.888856 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:19:36.889627 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:19:36.909126 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:19:37.144977 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:19:37.161454 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:19:37.191951 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:19:37.541312 systemd-networkd[787]: eth0: Gained IPv6LL
Mar 14 00:19:37.566758 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:19:37.567814 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:19:37.581037 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:19:37.652217 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:19:37.685232 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:19:37.720194 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:19:37.720828 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:19:37.721018 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:19:37.917986 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Mar 14 00:19:37.952851 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:19:37.953011 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:19:37.956635 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:19:37.953535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:19:38.048358 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:19:38.021654 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:19:38.067264 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:19:38.475077 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:19:38.524673 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:19:38.563968 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:19:38.594507 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:19:38.999012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:19:39.030079 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:19:39.034217 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:19:39.103451 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:19:39.112471 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:19:39.175301 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:19:39.189990 ignition[930]: INFO     : Ignition 2.19.0
Mar 14 00:19:39.189990 ignition[930]: INFO     : Stage: mount
Mar 14 00:19:39.200050 ignition[930]: INFO     : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:19:39.200050 ignition[930]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:19:39.200050 ignition[930]: INFO     : mount: mount passed
Mar 14 00:19:39.200050 ignition[930]: INFO     : Ignition finished successfully
Mar 14 00:19:39.202357 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:19:39.233249 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:19:39.274239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:19:39.317079 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Mar 14 00:19:39.328993 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:19:39.329054 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:19:39.329069 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:19:39.343244 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:19:39.351279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:19:39.435003 ignition[961]: INFO     : Ignition 2.19.0
Mar 14 00:19:39.435003 ignition[961]: INFO     : Stage: files
Mar 14 00:19:39.448330 ignition[961]: INFO     : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:19:39.448330 ignition[961]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:19:39.448330 ignition[961]: DEBUG    : files: compiled without relabeling support, skipping
Mar 14 00:19:39.481945 ignition[961]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Mar 14 00:19:39.481945 ignition[961]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:19:39.524305 ignition[961]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:19:39.542073 ignition[961]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Mar 14 00:19:39.549490 ignition[961]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:19:39.541662 unknown[961]: wrote ssh authorized keys file for user: core
Mar 14 00:19:39.589068 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:19:39.589068 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:19:39.589068 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:19:39.589068 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:19:39.706038 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:19:39.994328 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:19:40.003406 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 14 00:19:40.485996 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:19:44.575428 ignition[961]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:19:44.575428 ignition[961]: INFO     : files: op(c): [started]  processing unit "containerd.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(c): [finished] processing unit "containerd.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(e): [started]  processing unit "prepare-helm.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(10): [started]  processing unit "coreos-metadata.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(10): op(11): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 14 00:19:44.592293 ignition[961]: INFO     : files: op(12): [started]  setting preset to disabled for "coreos-metadata.service"
Mar 14 00:19:45.174744 ignition[961]: INFO     : files: op(12): op(13): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:19:45.204063 ignition[961]: INFO     : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:19:45.204063 ignition[961]: INFO     : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:19:45.204063 ignition[961]: INFO     : files: op(14): [started]  setting preset to enabled for "prepare-helm.service"
Mar 14 00:19:45.262791 ignition[961]: INFO     : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:19:45.262791 ignition[961]: INFO     : files: createResultFile: createFiles: op(15): [started]  writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:19:45.262791 ignition[961]: INFO     : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:19:45.262791 ignition[961]: INFO     : files: files passed
Mar 14 00:19:45.262791 ignition[961]: INFO     : Ignition finished successfully
Mar 14 00:19:45.215128 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:19:45.294241 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:19:45.318028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:19:45.366759 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:19:45.371363 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:19:45.398144 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 14 00:19:45.420898 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:19:45.420898 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:19:45.535174 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:19:45.452347 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:19:45.483083 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:19:45.587780 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:19:45.839779 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:19:45.840168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:19:45.845665 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:19:45.877675 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:19:45.885064 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:19:45.911095 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:19:46.078096 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:19:46.162268 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:19:46.228888 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:19:46.258327 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:19:46.269565 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:19:46.273410 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:19:46.273599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:19:46.281909 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:19:46.299245 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:19:46.299487 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:19:46.299633 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:19:46.299855 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:19:46.300057 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:19:46.300191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:19:46.300363 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:19:46.300532 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:19:46.689275 ignition[1015]: INFO : Ignition 2.19.0 Mar 14 00:19:46.689275 ignition[1015]: INFO : Stage: umount Mar 14 00:19:46.300749 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:19:46.715170 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:19:46.715170 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:19:46.715170 ignition[1015]: INFO : umount: umount passed Mar 14 00:19:46.715170 ignition[1015]: INFO : Ignition finished successfully Mar 14 00:19:46.300866 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:19:46.301149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Mar 14 00:19:46.301534 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:19:46.301765 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:19:46.301875 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:19:46.309141 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:19:46.312522 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:19:46.312801 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 14 00:19:46.313192 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:19:46.313386 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:19:46.313638 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:19:46.313830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:19:46.318885 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:19:46.370535 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:19:46.418327 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:19:46.452520 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:19:46.453170 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:19:46.453625 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:19:46.453813 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:19:46.494939 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:19:46.495530 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:19:46.510827 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:19:46.511307 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Mar 14 00:19:46.588865 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:19:46.604180 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:19:46.604458 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:19:46.621079 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:19:46.659182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:19:46.659367 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:19:46.666843 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:19:46.667099 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:19:46.692099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:19:46.692391 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:19:46.709546 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:19:46.709858 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:19:46.719392 systemd[1]: Stopped target network.target - Network. Mar 14 00:19:46.747303 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:19:46.747819 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:19:46.759184 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:19:46.759417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:19:46.768250 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:19:46.768519 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:19:46.781192 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:19:46.781371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:19:46.807426 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Mar 14 00:19:46.815770 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:19:46.824238 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:19:46.849888 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:19:46.850561 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:19:46.854077 systemd-networkd[787]: eth0: DHCPv6 lease lost Mar 14 00:19:46.870824 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:19:46.877175 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:19:46.947648 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:19:46.952550 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:19:46.962838 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:19:46.963001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:19:46.969080 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:19:46.969177 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:19:46.999614 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:19:47.009178 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:19:47.010161 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:19:47.054104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:19:47.054235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:19:47.094275 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:19:47.094413 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:19:47.099298 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Mar 14 00:19:47.099383 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:19:47.104945 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:19:47.220059 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:19:47.220346 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:19:47.289457 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:19:47.289658 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:19:47.308592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:19:47.308758 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:19:47.350181 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:19:47.350851 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:19:47.372664 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:19:47.372944 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:19:47.393832 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:19:47.393939 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:19:47.420791 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:19:47.420915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:19:47.539573 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:19:47.558988 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:19:47.559144 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:19:47.570528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 14 00:19:47.570634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:19:47.613075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:19:47.614423 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:19:47.647564 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:19:47.861046 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:19:47.919506 systemd[1]: Switching root. Mar 14 00:19:47.986568 systemd-journald[194]: Journal stopped Mar 14 00:19:51.728474 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 14 00:19:51.728609 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:19:51.728646 kernel: SELinux: policy capability open_perms=1 Mar 14 00:19:51.728677 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:19:51.728759 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:19:51.728791 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:19:51.728812 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:19:51.728843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:19:51.728863 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:19:51.728882 kernel: audit: type=1403 audit(1773447588.594:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:19:51.728904 systemd[1]: Successfully loaded SELinux policy in 120.798ms. Mar 14 00:19:51.728939 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.557ms. 
Mar 14 00:19:51.734258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:19:51.734322 systemd[1]: Detected virtualization kvm. Mar 14 00:19:51.734345 systemd[1]: Detected architecture x86-64. Mar 14 00:19:51.734368 systemd[1]: Detected first boot. Mar 14 00:19:51.734390 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:19:51.734411 zram_generator::config[1076]: No configuration found. Mar 14 00:19:51.734443 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:19:51.734525 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:19:51.734666 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 14 00:19:51.734750 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:19:51.734784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:19:51.734806 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:19:51.734828 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:19:51.734850 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:19:51.734870 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:19:51.734891 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:19:51.734912 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:19:51.734933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 14 00:19:51.735544 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:19:51.735566 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:19:51.735585 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:19:51.735607 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:19:51.735628 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:19:51.735647 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:19:51.735667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:19:51.735744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:19:51.735769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:19:51.735794 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:19:51.735855 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:19:51.735874 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:19:51.735891 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:19:51.735910 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:19:51.735928 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:19:51.735946 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:19:51.735965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:19:51.736040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:19:51.736059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 14 00:19:51.736078 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:19:51.736096 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:19:51.736114 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:19:51.736130 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:19:51.736148 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:19:51.736166 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:19:51.736184 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:19:51.736207 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 14 00:19:51.736226 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:19:51.736243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:19:51.736260 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:19:51.736277 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:19:51.736344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:19:51.736364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:19:51.736383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:19:51.736400 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:19:51.736424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:19:51.736443 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Mar 14 00:19:51.736461 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 14 00:19:51.736480 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 14 00:19:51.736498 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:19:51.736515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:19:51.736532 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:19:51.736550 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:19:51.736573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:19:51.736592 kernel: loop: module loaded Mar 14 00:19:51.736612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:19:51.736630 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:19:51.736754 systemd-journald[1176]: Collecting audit messages is disabled. Mar 14 00:19:51.736794 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:19:51.736814 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:19:51.736838 systemd-journald[1176]: Journal started Mar 14 00:19:51.736868 systemd-journald[1176]: Runtime Journal (/run/log/journal/a6cb03b4315348309d50ea73499b8f75) is 6.0M, max 48.4M, 42.3M free. Mar 14 00:19:51.765049 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:19:51.775314 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:19:51.791550 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:19:51.799442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Mar 14 00:19:51.810918 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:19:51.820015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:19:51.832148 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:19:51.832485 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:19:51.841635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:19:51.842137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:19:51.853351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:19:51.853834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:19:51.861171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:19:51.869091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:19:51.879553 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:19:51.899123 kernel: fuse: init (API version 7.39) Mar 14 00:19:51.902013 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:19:51.902383 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:19:51.933767 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:19:51.934437 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:19:51.950900 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:19:51.994285 kernel: ACPI: bus type drm_connector registered Mar 14 00:19:51.999026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:19:52.018918 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 14 00:19:52.034917 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:19:52.047308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:19:52.073087 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:19:52.089758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:19:52.098308 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:19:52.105186 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:19:52.115209 systemd-journald[1176]: Time spent on flushing to /var/log/journal/a6cb03b4315348309d50ea73499b8f75 is 81.588ms for 926 entries. Mar 14 00:19:52.115209 systemd-journald[1176]: System Journal (/var/log/journal/a6cb03b4315348309d50ea73499b8f75) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:19:52.237031 systemd-journald[1176]: Received client request to flush runtime journal. Mar 14 00:19:52.127168 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:19:52.146927 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:19:52.170544 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:19:52.173492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:19:52.185403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:19:52.203252 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:19:52.225501 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Mar 14 00:19:52.231910 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:19:52.243549 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:19:52.266754 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:19:52.300042 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:19:52.321684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:19:52.347036 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 14 00:19:52.391649 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Mar 14 00:19:52.391774 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Mar 14 00:19:52.409630 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:19:52.448169 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:19:52.585557 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:19:52.623108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:19:52.693589 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Mar 14 00:19:52.693656 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Mar 14 00:19:52.717596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:19:53.531637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:19:53.559101 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:19:53.617051 systemd-udevd[1243]: Using default interface naming scheme 'v255'. 
Mar 14 00:19:53.695532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:19:53.722061 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:19:53.758674 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:19:53.822140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1244) Mar 14 00:19:53.856450 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 14 00:19:53.927886 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:19:53.972496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:19:54.059778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 14 00:19:54.069302 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:19:54.076640 systemd-networkd[1251]: lo: Link UP Mar 14 00:19:54.076654 systemd-networkd[1251]: lo: Gained carrier Mar 14 00:19:54.083428 systemd-networkd[1251]: Enumeration completed Mar 14 00:19:54.083762 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:19:54.087092 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:19:54.087299 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:19:54.099468 systemd-networkd[1251]: eth0: Link UP Mar 14 00:19:54.099481 systemd-networkd[1251]: eth0: Gained carrier Mar 14 00:19:54.099509 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:19:54.122321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 14 00:19:54.154836 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:19:54.193078 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:19:54.257078 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 14 00:19:54.258656 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 14 00:19:54.259180 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 14 00:19:54.257525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:19:59.066361 systemd-networkd[1251]: eth0: Gained IPv6LL Mar 14 00:19:59.529612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:19:59.716444 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:19:59.842267 kernel: kvm_amd: TSC scaling supported Mar 14 00:19:59.860883 kernel: kvm_amd: Nested Virtualization enabled Mar 14 00:19:59.860943 kernel: kvm_amd: Nested Paging enabled Mar 14 00:19:59.860989 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 14 00:19:59.861048 kernel: kvm_amd: PMU virtualization is disabled Mar 14 00:19:59.977381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:20:00.233216 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:20:00.299175 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:20:00.367096 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:20:00.456157 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:20:00.518238 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:20:00.548373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Mar 14 00:20:00.585202 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:20:00.640458 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:20:00.702465 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:20:00.712158 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:20:00.722536 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:20:00.727052 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:20:00.732819 systemd[1]: Reached target machines.target - Containers. Mar 14 00:20:00.742944 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:20:00.772304 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:20:00.790310 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:20:00.795851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:20:00.801127 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:20:00.815085 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:20:00.839001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:20:00.855397 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:20:00.863582 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Mar 14 00:20:00.888841 kernel: loop0: detected capacity change from 0 to 140768 Mar 14 00:20:00.919077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:20:00.925293 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:20:00.994806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:20:01.064328 kernel: loop1: detected capacity change from 0 to 142488 Mar 14 00:20:01.260768 kernel: loop2: detected capacity change from 0 to 228704 Mar 14 00:20:01.500581 kernel: loop3: detected capacity change from 0 to 140768 Mar 14 00:20:01.609279 kernel: loop4: detected capacity change from 0 to 142488 Mar 14 00:20:02.084900 kernel: loop5: detected capacity change from 0 to 228704 Mar 14 00:20:02.182764 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 14 00:20:02.186061 (sd-merge)[1315]: Merged extensions into '/usr'. Mar 14 00:20:02.208678 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:20:02.209816 systemd[1]: Reloading... Mar 14 00:20:02.673806 zram_generator::config[1345]: No configuration found. Mar 14 00:20:03.168983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:20:03.304941 systemd[1]: Reloading finished in 1092 ms. Mar 14 00:20:03.317154 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:20:03.339864 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:20:03.345788 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:20:03.394392 systemd[1]: Starting ensure-sysext.service... 
Mar 14 00:20:03.406830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:20:03.428095 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:20:03.428143 systemd[1]: Reloading... Mar 14 00:20:03.489334 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:20:03.490339 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:20:03.493905 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:20:03.495910 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Mar 14 00:20:03.496504 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Mar 14 00:20:03.512616 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:20:03.512673 systemd-tmpfiles[1387]: Skipping /boot Mar 14 00:20:03.538637 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:20:03.538680 systemd-tmpfiles[1387]: Skipping /boot Mar 14 00:20:03.565997 zram_generator::config[1418]: No configuration found. Mar 14 00:20:03.835765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:20:03.944568 systemd[1]: Reloading finished in 511 ms. Mar 14 00:20:03.972123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:20:04.007129 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:20:04.032991 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 14 00:20:04.045016 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:20:04.064448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:20:04.099453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:20:04.112576 augenrules[1480]: No rules Mar 14 00:20:04.122317 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:20:04.148310 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:20:04.164664 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:20:04.195364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:20:04.195656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:20:04.209502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:20:04.229084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:20:04.241537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:20:04.251928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:20:04.257469 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:20:04.266005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:20:04.279245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:20:04.279791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:20:04.286915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 14 00:20:04.287406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:20:04.294543 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:20:04.295075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:20:04.310838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:20:04.311318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:20:04.320872 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:20:04.329839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:20:04.330345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:20:04.341356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:20:04.348378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:20:04.355681 systemd-resolved[1475]: Positive Trust Anchors: Mar 14 00:20:04.355790 systemd-resolved[1475]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:20:04.355842 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:20:04.364957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:20:04.374563 systemd-resolved[1475]: Defaulting to hostname 'linux'. Mar 14 00:20:04.375249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:20:04.387977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:20:04.389526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:20:04.394849 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:20:04.401364 systemd[1]: Finished ensure-sysext.service. Mar 14 00:20:04.410848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:20:04.411353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:20:04.422683 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:20:04.423376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:20:04.437533 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
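The negative trust anchors systemd-resolved lists above disable DNSSEC validation for private and special-use zones, including the reverse zones for the RFC 1918 ranges. A sketch, under the assumption that the list matches the defaults printed in the log, of checking whether a given address's PTR name falls under one of those anchors (`reverse_zone_anchored` is an illustrative helper, not a resolved API):

```python
import ipaddress

# The RFC 1918 reverse zones from resolved's default negative trust anchors,
# as shown in the log above.
NEGATIVE_ANCHORS = (
    ["10.in-addr.arpa"]
    + [f"{i}.172.in-addr.arpa" for i in range(16, 32)]
    + ["168.192.in-addr.arpa"]
)

def reverse_zone_anchored(ip):
    """True if the PTR name for `ip` falls under a listed negative trust anchor."""
    ptr = ipaddress.ip_address(ip).reverse_pointer  # e.g. '1.0.0.10.in-addr.arpa'
    return any(ptr == zone or ptr.endswith("." + zone) for zone in NEGATIVE_ANCHORS)

print(reverse_zone_anchored("10.0.0.1"))    # → True  (covered by 10.in-addr.arpa)
print(reverse_zone_anchored("172.20.5.9"))  # → True  (covered by 20.172.in-addr.arpa)
print(reverse_zone_anchored("8.8.8.8"))     # → False (public address, validated)
```

This is why the reverse lookup for the NTP server at 10.0.0.1 later in the boot does not require a signed response.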
Mar 14 00:20:04.452409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:20:04.452949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:20:04.467227 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:20:04.467947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:20:04.498343 systemd[1]: Reached target network.target - Network. Mar 14 00:20:04.502926 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:20:04.515518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:20:04.531912 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:20:04.532130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:20:04.556535 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:20:04.567674 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:20:04.688098 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:20:04.693903 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 14 00:20:04.693973 systemd-timesyncd[1522]: Initial clock synchronization to Sat 2026-03-14 00:20:05.006069 UTC. Mar 14 00:20:04.699516 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:20:04.706293 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:20:04.713637 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
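The timesyncd lines above imply a small forward clock step: the message was journaled at 00:20:04.693973 but reports an initial synchronization to 00:20:05.006069 UTC. A rough sketch of that arithmetic, assuming both stamps are on the same (UTC) timeline:

```python
from datetime import datetime

# Journal timestamp of the timesyncd message vs. the wall-clock time the
# system was stepped to, per the "Initial clock synchronization" line above.
before = datetime(2026, 3, 14, 0, 20, 4, 693973)
after = datetime(2026, 3, 14, 0, 20, 5, 6069)
step = (after - before).total_seconds()
print(f"clock stepped forward by {step:.3f} s")  # → clock stepped forward by 0.312 s
```

A sub-second step like this is typical for a KVM guest whose clock was already close thanks to kvm-clock.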
Mar 14 00:20:04.720464 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:20:04.727916 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:20:04.728012 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:20:04.733196 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:20:04.739331 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:20:04.745798 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:20:04.752992 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:20:04.761921 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:20:04.772464 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:20:04.779299 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:20:05.145061 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:20:05.152336 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:20:05.161412 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:20:05.175118 systemd[1]: System is tainted: cgroupsv1 Mar 14 00:20:05.175314 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:20:05.175382 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:20:05.203210 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:20:05.216207 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 14 00:20:05.242546 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:20:05.271120 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Mar 14 00:20:05.284057 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:20:05.289521 jq[1531]: false Mar 14 00:20:05.290811 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:20:05.306189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:20:05.318894 dbus-daemon[1529]: [system] SELinux support is enabled Mar 14 00:20:05.322139 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:20:05.514148 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:20:05.515188 extend-filesystems[1532]: Found loop3 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found loop4 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found loop5 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found sr0 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda1 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda2 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda3 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found usr Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda4 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda6 Mar 14 00:20:05.515188 extend-filesystems[1532]: Found vda7 Mar 14 00:20:05.695246 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 14 00:20:05.526977 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 14 00:20:05.695502 extend-filesystems[1532]: Found vda9 Mar 14 00:20:05.695502 extend-filesystems[1532]: Checking size of /dev/vda9 Mar 14 00:20:05.695502 extend-filesystems[1532]: Resized partition /dev/vda9 Mar 14 00:20:05.787702 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 14 00:20:05.611243 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:20:05.788238 extend-filesystems[1545]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:20:05.788238 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 14 00:20:05.788238 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 14 00:20:05.788238 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 14 00:20:05.624999 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:20:05.866355 extend-filesystems[1532]: Resized filesystem in /dev/vda9 Mar 14 00:20:05.701064 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:20:05.711429 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:20:05.718323 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:20:05.745691 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:20:05.880971 update_engine[1565]: I20260314 00:20:05.852592 1565 main.cc:92] Flatcar Update Engine starting Mar 14 00:20:05.880971 update_engine[1565]: I20260314 00:20:05.855073 1565 update_check_scheduler.cc:74] Next update check in 2m29s Mar 14 00:20:05.781322 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:20:05.881653 jq[1568]: true Mar 14 00:20:05.838977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
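The kernel and resize2fs lines above record an online grow of /dev/vda9 from 553472 to 1864699 blocks, with ext4 using 4 KiB blocks here. A small sketch of what those numbers mean in bytes (`fs_bytes` is an illustrative helper):

```python
BLOCK_SIZE = 4096  # ext4 block size, per the "(4k) blocks" note in the log

def fs_bytes(blocks, block_size=BLOCK_SIZE):
    """Filesystem size in bytes for a given block count."""
    return blocks * block_size

old = fs_bytes(553472)   # size before the grow
new = fs_bytes(1864699)  # size after the grow
print(old, new, new - old)       # → 2267021312 7637807104 5370785792
print(f"{new / 2**30:.2f} GiB")  # → 7.11 GiB
```

So extend-filesystems roughly tripled the root filesystem, from about 2.1 GiB to about 7.1 GiB, without unmounting it.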
Mar 14 00:20:05.895899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1562) Mar 14 00:20:05.839627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:20:05.840684 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:20:05.841231 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:20:05.890372 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:20:05.891048 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:20:05.913986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:20:05.923967 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:20:05.924043 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:20:05.928195 systemd-logind[1559]: New seat seat0. Mar 14 00:20:05.942660 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:20:05.955145 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:20:05.956241 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:20:06.008377 jq[1587]: true Mar 14 00:20:06.032531 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:20:06.131213 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 14 00:20:06.134802 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 14 00:20:06.195222 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:20:06.202908 tar[1582]: linux-amd64/LICENSE Mar 14 00:20:06.211791 tar[1582]: linux-amd64/helm Mar 14 00:20:06.215217 systemd[1]: Started update-engine.service - Update Engine. 
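update_engine above schedules its "Next update check in 2m29s". A minimal sketch of parsing that Go-style duration into seconds; `parse_delay` is a hypothetical helper and only handles the minutes/seconds form seen in this log:

```python
import re

# Matches trailing durations like "2m29s", "29s", or "2m".
DUR_RE = re.compile(r"(?:(\d+)m)?(?:(\d+)s)?$")

def parse_delay(text):
    """Convert a 'XmYs' duration string to total seconds."""
    m = DUR_RE.search(text)
    minutes = int(m.group(1) or 0)
    seconds = int(m.group(2) or 0)
    return minutes * 60 + seconds

print(parse_delay("2m29s"))  # → 149
```

The randomized first-check delay (here 149 s) spreads update-server load across a fleet of freshly booted machines.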
Mar 14 00:20:06.227417 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:20:06.227897 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:20:06.228358 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:20:06.244714 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:20:06.249060 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:20:06.302949 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:20:06.303655 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:20:06.333103 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:20:06.658161 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:20:06.755253 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 14 00:20:07.058952 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:20:07.337516 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:20:07.886196 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:20:08.403310 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:20:08.442011 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 14 00:20:08.778158 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:37758.service - OpenSSH per-connection server daemon (10.0.0.1:37758). Mar 14 00:20:08.893540 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:20:08.894309 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:20:08.929285 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:20:09.492533 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:20:09.514361 containerd[1588]: time="2026-03-14T00:20:09.513256150Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:20:09.527396 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:20:09.556315 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:20:09.759549 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:20:10.204487 containerd[1588]: time="2026-03-14T00:20:10.202306008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:20:10.221839 containerd[1588]: time="2026-03-14T00:20:10.221767009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.221978468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.222055696Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.222605090Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.222672517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.222953836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.222981499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.227396217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.227435132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.227457186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:20:10.227757 containerd[1588]: time="2026-03-14T00:20:10.227471441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:20:10.228184 containerd[1588]: time="2026-03-14T00:20:10.228155874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:20:10.229023 containerd[1588]: time="2026-03-14T00:20:10.228990756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:20:10.232601 containerd[1588]: time="2026-03-14T00:20:10.232561119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:20:10.232752 containerd[1588]: time="2026-03-14T00:20:10.232677876Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:20:10.235795 containerd[1588]: time="2026-03-14T00:20:10.235749328Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:20:10.235990 containerd[1588]: time="2026-03-14T00:20:10.235968830Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:20:10.242818 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 37758 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:20:10.453627 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:20:10.503343 containerd[1588]: time="2026-03-14T00:20:10.500388881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:20:10.503343 containerd[1588]: time="2026-03-14T00:20:10.500620964Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:20:10.503343 containerd[1588]: time="2026-03-14T00:20:10.500645204Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:20:10.503343 containerd[1588]: time="2026-03-14T00:20:10.500667635Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:20:10.503343 containerd[1588]: time="2026-03-14T00:20:10.500799608Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 14 00:20:10.503343 containerd[1588]: time="2026-03-14T00:20:10.501294338Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:20:10.507475 containerd[1588]: time="2026-03-14T00:20:10.507419186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508247395Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508282939Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508358020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508531688Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508593454Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508658336Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.508787622Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509179616Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509212257Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509234944Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509254883Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509375082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509403830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.510803 containerd[1588]: time="2026-03-14T00:20:10.509472392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509540259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509567258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509589679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509609362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509627674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509644302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509664925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509680060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509698260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.509781609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.510506490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.510603287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.511792 containerd[1588]: time="2026-03-14T00:20:10.510657921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:20:10.512864 containerd[1588]: time="2026-03-14T00:20:10.512831172Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:20:10.512920 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:20:10.554290 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.531818895Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.531905238Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.531941139Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.531994545Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.532013829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.532036168Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.532119967Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:20:10.570224 containerd[1588]: time="2026-03-14T00:20:10.532143166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.535258664Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.535364823Z" level=info msg="Connect containerd service"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.538097626Z" level=info msg="using legacy CRI server"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.538120803Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.542107126Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.544397144Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.546629841Z" level=info msg="Start subscribing containerd event"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.546889649Z" level=info msg="Start recovering state"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.547148485Z" level=info msg="Start event monitor"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.547199398Z" level=info msg="Start snapshots syncer"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.547249065Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:20:10.577245 containerd[1588]: time="2026-03-14T00:20:10.547279354Z" level=info msg="Start streaming server"
Mar 14 00:20:10.599407 systemd-logind[1559]: New session 1 of user core.
Mar 14 00:20:10.619155 containerd[1588]: time="2026-03-14T00:20:10.619106154Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:20:10.619551 containerd[1588]: time="2026-03-14T00:20:10.619428636Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:20:10.620524 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:20:10.645868 containerd[1588]: time="2026-03-14T00:20:10.638851410Z" level=info msg="containerd successfully booted in 1.137169s"
Mar 14 00:20:11.465254 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:20:11.483142 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:20:11.549852 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:20:12.939786 tar[1582]: linux-amd64/README.md
Mar 14 00:20:13.020602 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:20:13.081295 systemd[1668]: Queued start job for default target default.target.
Mar 14 00:20:13.083898 systemd[1668]: Created slice app.slice - User Application Slice.
Mar 14 00:20:13.084006 systemd[1668]: Reached target paths.target - Paths.
Mar 14 00:20:13.084031 systemd[1668]: Reached target timers.target - Timers.
Mar 14 00:20:13.110932 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:20:13.466530 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:20:13.466861 systemd[1668]: Reached target sockets.target - Sockets.
Mar 14 00:20:13.466955 systemd[1668]: Reached target basic.target - Basic System.
Mar 14 00:20:13.467155 systemd[1668]: Reached target default.target - Main User Target.
Mar 14 00:20:13.467340 systemd[1668]: Startup finished in 1.414s.
Mar 14 00:20:13.472578 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:20:13.547847 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:20:13.662206 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:38510.service - OpenSSH per-connection server daemon (10.0.0.1:38510).
Mar 14 00:20:14.186085 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 38510 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:14.189516 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:14.213260 systemd-logind[1559]: New session 2 of user core.
Mar 14 00:20:14.225789 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:20:14.684141 sshd[1685]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:14.721281 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:38524.service - OpenSSH per-connection server daemon (10.0.0.1:38524).
Mar 14 00:20:14.722246 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:38510.service: Deactivated successfully.
Mar 14 00:20:15.009958 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:20:15.073694 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:20:15.112090 systemd-logind[1559]: Removed session 2.
Mar 14 00:20:15.221126 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:15.226641 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:15.249614 systemd-logind[1559]: New session 3 of user core.
Mar 14 00:20:15.326959 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:20:15.802836 sshd[1691]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:15.821868 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:38524.service: Deactivated successfully.
Mar 14 00:20:15.837153 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:20:15.838075 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:20:15.844804 systemd-logind[1559]: Removed session 3.
Mar 14 00:20:17.851290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:20:17.864828 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:20:17.865939 systemd[1]: Startup finished in 25.987s (kernel) + 29.384s (userspace) = 55.372s.
Mar 14 00:20:17.970998 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:20:24.374443 kubelet[1713]: E0314 00:20:24.373514 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:20:24.381163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:20:24.381614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:20:25.887427 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:49134.service - OpenSSH per-connection server daemon (10.0.0.1:49134).
Mar 14 00:20:25.956966 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 49134 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:25.969326 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:26.011798 systemd-logind[1559]: New session 4 of user core.
Mar 14 00:20:26.042976 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:20:26.177752 sshd[1723]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:26.194429 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:49138.service - OpenSSH per-connection server daemon (10.0.0.1:49138).
Mar 14 00:20:26.195295 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:49134.service: Deactivated successfully.
Mar 14 00:20:26.204285 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:20:26.207681 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:20:26.219534 systemd-logind[1559]: Removed session 4.
Mar 14 00:20:26.344247 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 49138 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:26.351848 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:26.384939 systemd-logind[1559]: New session 5 of user core.
Mar 14 00:20:26.395879 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:20:26.460444 sshd[1728]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:26.477202 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:49148.service - OpenSSH per-connection server daemon (10.0.0.1:49148).
Mar 14 00:20:26.478167 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:49138.service: Deactivated successfully.
Mar 14 00:20:26.497649 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:20:26.498212 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:20:26.504922 systemd-logind[1559]: Removed session 5.
Mar 14 00:20:26.542139 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 49148 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:26.544666 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:26.558110 systemd-logind[1559]: New session 6 of user core.
Mar 14 00:20:26.567241 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:20:26.654115 sshd[1736]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:26.675630 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:49156.service - OpenSSH per-connection server daemon (10.0.0.1:49156).
Mar 14 00:20:26.691682 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:49148.service: Deactivated successfully.
Mar 14 00:20:26.709846 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:20:26.726849 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:20:26.737354 systemd-logind[1559]: Removed session 6.
Mar 14 00:20:26.851865 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 49156 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:26.855625 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:26.963366 systemd-logind[1559]: New session 7 of user core.
Mar 14 00:20:26.988795 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:20:27.398497 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:20:27.401347 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:20:27.470920 sudo[1751]: pam_unix(sudo:session): session closed for user root
Mar 14 00:20:27.480325 sshd[1745]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:27.492610 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:49178.service - OpenSSH per-connection server daemon (10.0.0.1:49178).
Mar 14 00:20:27.493538 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:49156.service: Deactivated successfully.
Mar 14 00:20:27.502410 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:20:27.508907 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:20:27.541037 systemd-logind[1559]: Removed session 7.
Mar 14 00:20:27.603265 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 49178 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:27.605838 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:27.642233 systemd-logind[1559]: New session 8 of user core.
Mar 14 00:20:27.652346 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:20:27.800640 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:20:27.802333 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:20:27.852042 sudo[1761]: pam_unix(sudo:session): session closed for user root
Mar 14 00:20:27.887492 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:20:27.890102 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:20:27.988273 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:20:28.007191 auditctl[1764]: No rules
Mar 14 00:20:28.008055 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:20:28.008605 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:20:28.048368 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:20:28.372824 augenrules[1783]: No rules
Mar 14 00:20:28.378657 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:20:28.385273 sudo[1760]: pam_unix(sudo:session): session closed for user root
Mar 14 00:20:28.407069 sshd[1753]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:28.421381 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:49208.service - OpenSSH per-connection server daemon (10.0.0.1:49208).
Mar 14 00:20:28.444538 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:49178.service: Deactivated successfully.
Mar 14 00:20:28.449890 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:20:28.451112 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:20:28.456647 systemd-logind[1559]: Removed session 8.
Mar 14 00:20:28.609604 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 49208 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:28.615144 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:28.654438 systemd-logind[1559]: New session 9 of user core.
Mar 14 00:20:28.663024 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:20:28.747838 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:20:28.748888 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:20:33.830406 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:20:33.839616 (dockerd)[1816]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:20:35.353414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:20:35.375973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:20:37.825165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:20:37.878985 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:20:38.416816 kubelet[1835]: E0314 00:20:38.415909 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:20:38.426442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:20:38.430291 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:20:38.771369 dockerd[1816]: time="2026-03-14T00:20:38.769022830Z" level=info msg="Starting up"
Mar 14 00:20:39.756218 systemd[1]: var-lib-docker-metacopy\x2dcheck3408938221-merged.mount: Deactivated successfully.
Mar 14 00:20:39.804256 dockerd[1816]: time="2026-03-14T00:20:39.804047237Z" level=info msg="Loading containers: start."
Mar 14 00:20:40.220318 kernel: Initializing XFRM netlink socket
Mar 14 00:20:40.590636 systemd-networkd[1251]: docker0: Link UP
Mar 14 00:20:40.640917 dockerd[1816]: time="2026-03-14T00:20:40.637251704Z" level=info msg="Loading containers: done."
Mar 14 00:20:40.674939 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck594578665-merged.mount: Deactivated successfully.
Mar 14 00:20:40.681754 dockerd[1816]: time="2026-03-14T00:20:40.681488637Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:20:40.681937 dockerd[1816]: time="2026-03-14T00:20:40.681806989Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:20:40.682251 dockerd[1816]: time="2026-03-14T00:20:40.682136319Z" level=info msg="Daemon has completed initialization"
Mar 14 00:20:40.770743 dockerd[1816]: time="2026-03-14T00:20:40.770508753Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:20:40.773222 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:20:43.428487 containerd[1588]: time="2026-03-14T00:20:43.428221123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:20:45.328614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515762397.mount: Deactivated successfully.
Mar 14 00:20:48.581654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:20:48.632526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:20:50.213865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:20:50.261438 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:20:51.319385 kubelet[2054]: E0314 00:20:51.319137 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:20:51.343772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:20:51.345817 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:20:51.514046 update_engine[1565]: I20260314 00:20:51.510622 1565 update_attempter.cc:509] Updating boot flags...
Mar 14 00:20:52.014045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2072)
Mar 14 00:20:52.374568 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2072)
Mar 14 00:20:58.005071 containerd[1588]: time="2026-03-14T00:20:58.004777128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:20:58.009382 containerd[1588]: time="2026-03-14T00:20:58.008196131Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 14 00:20:58.011488 containerd[1588]: time="2026-03-14T00:20:58.011450953Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:20:58.022110 containerd[1588]: time="2026-03-14T00:20:58.022036782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:20:58.026839 containerd[1588]: time="2026-03-14T00:20:58.026505289Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 14.598159952s"
Mar 14 00:20:58.026839 containerd[1588]: time="2026-03-14T00:20:58.026547288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 14 00:20:58.029828 containerd[1588]: time="2026-03-14T00:20:58.029799608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:21:01.571423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:21:01.589004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:02.871499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:02.933259 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:21:03.606529 kubelet[2094]: E0314 00:21:03.605177 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:21:03.624554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:21:03.625087 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:21:06.736746 containerd[1588]: time="2026-03-14T00:21:06.735133096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:06.745941 containerd[1588]: time="2026-03-14T00:21:06.742235346Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 14 00:21:06.749265 containerd[1588]: time="2026-03-14T00:21:06.748917663Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:06.760747 containerd[1588]: time="2026-03-14T00:21:06.760264157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:06.847515 containerd[1588]: time="2026-03-14T00:21:06.846364585Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 8.816372074s"
Mar 14 00:21:06.847515 containerd[1588]: time="2026-03-14T00:21:06.846786269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 14 00:21:06.852268 containerd[1588]: time="2026-03-14T00:21:06.851863508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:21:13.252110 containerd[1588]: time="2026-03-14T00:21:13.248654270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:13.257613 containerd[1588]: time="2026-03-14T00:21:13.257524952Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 14 00:21:13.264204 containerd[1588]: time="2026-03-14T00:21:13.263864545Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:13.271770 containerd[1588]: time="2026-03-14T00:21:13.271409651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:13.275079 containerd[1588]: time="2026-03-14T00:21:13.274218647Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 6.422244923s"
Mar 14 00:21:13.275079 containerd[1588]: time="2026-03-14T00:21:13.274345622Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 14 00:21:13.284894 containerd[1588]: time="2026-03-14T00:21:13.284295466Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:21:13.816308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 14 00:21:13.832663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:14.409205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:14.424751 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:21:14.897087 kubelet[2120]: E0314 00:21:14.896916 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:21:14.918134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:21:14.931157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:21:17.735456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746716739.mount: Deactivated successfully.
Mar 14 00:21:21.274669 containerd[1588]: time="2026-03-14T00:21:21.274230180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:21.277889 containerd[1588]: time="2026-03-14T00:21:21.277758838Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 14 00:21:21.280439 containerd[1588]: time="2026-03-14T00:21:21.280387816Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:21.286522 containerd[1588]: time="2026-03-14T00:21:21.286370729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:21.287359 containerd[1588]: time="2026-03-14T00:21:21.287277096Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 8.002886227s"
Mar 14 00:21:21.287359 containerd[1588]: time="2026-03-14T00:21:21.287334179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 14 00:21:21.289918 containerd[1588]: time="2026-03-14T00:21:21.289868306Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:21:22.099782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156596246.mount: Deactivated successfully.
Mar 14 00:21:25.062054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 14 00:21:25.081119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:25.161302 containerd[1588]: time="2026-03-14T00:21:25.160965730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:25.168206 containerd[1588]: time="2026-03-14T00:21:25.165747058Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 14 00:21:25.226422 containerd[1588]: time="2026-03-14T00:21:25.225916563Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:25.301024 containerd[1588]: time="2026-03-14T00:21:25.300002750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:25.302758 containerd[1588]: time="2026-03-14T00:21:25.302569510Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.012571398s"
Mar 14 00:21:25.302758 containerd[1588]: time="2026-03-14T00:21:25.302631403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 14 00:21:25.304797 containerd[1588]: time="2026-03-14T00:21:25.304650287Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:21:25.460148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:25.492588 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:21:25.687930 kubelet[2202]: E0314 00:21:25.687827 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:21:25.692476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:21:25.693085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:21:25.960485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941537029.mount: Deactivated successfully.
Mar 14 00:21:25.969945 containerd[1588]: time="2026-03-14T00:21:25.969845338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:25.970975 containerd[1588]: time="2026-03-14T00:21:25.970909347Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 14 00:21:25.972603 containerd[1588]: time="2026-03-14T00:21:25.972532745Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:25.979229 containerd[1588]: time="2026-03-14T00:21:25.978655394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:25.980249 containerd[1588]: time="2026-03-14T00:21:25.980175316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 675.406274ms"
Mar 14 00:21:25.980249 containerd[1588]: time="2026-03-14T00:21:25.980220365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 14 00:21:25.981726 containerd[1588]: time="2026-03-14T00:21:25.981667550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 00:21:26.861790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281366574.mount: Deactivated successfully.
Mar 14 00:21:28.495177 containerd[1588]: time="2026-03-14T00:21:28.495015228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:28.496787 containerd[1588]: time="2026-03-14T00:21:28.496576635Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 14 00:21:28.498454 containerd[1588]: time="2026-03-14T00:21:28.498369791Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:28.502348 containerd[1588]: time="2026-03-14T00:21:28.502291904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:28.503964 containerd[1588]: time="2026-03-14T00:21:28.503844950Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.522070428s"
Mar 14 00:21:28.503964 containerd[1588]: time="2026-03-14T00:21:28.503941700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 14 00:21:31.890842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:31.912103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:32.012003 systemd[1]: Reloading requested from client PID 2311 ('systemctl') (unit session-9.scope)...
Mar 14 00:21:32.012644 systemd[1]: Reloading...
Mar 14 00:21:32.224792 zram_generator::config[2350]: No configuration found.
Mar 14 00:21:33.132459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:21:34.303856 systemd[1]: Reloading finished in 2289 ms.
Mar 14 00:21:35.296000 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:21:35.304335 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:21:35.305869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:35.415901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:48.181496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:48.211462 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:21:48.639029 kubelet[2409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:21:48.639029 kubelet[2409]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:21:48.639029 kubelet[2409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:21:48.645213 kubelet[2409]: I0314 00:21:48.639895 2409 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:21:50.794078 kubelet[2409]: I0314 00:21:50.793113 2409 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:21:50.794078 kubelet[2409]: I0314 00:21:50.793254 2409 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:21:50.794078 kubelet[2409]: I0314 00:21:50.794168 2409 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:21:50.917818 kubelet[2409]: I0314 00:21:50.912595 2409 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:21:50.919455 kubelet[2409]: E0314 00:21:50.919394 2409 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:21:50.960775 kubelet[2409]: E0314 00:21:50.958628 2409 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:21:50.960775 kubelet[2409]: I0314 00:21:50.958680 2409 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:21:51.020460 kubelet[2409]: I0314 00:21:51.018516 2409 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:21:51.023573 kubelet[2409]: I0314 00:21:51.022204 2409 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:21:51.031351 kubelet[2409]: I0314 00:21:51.025536 2409 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 14 00:21:51.032453 kubelet[2409]: I0314 00:21:51.032382 2409 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:21:51.032453 kubelet[2409]: I0314 00:21:51.032436 2409 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:21:51.033623 kubelet[2409]: I0314 00:21:51.033181 2409 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:21:51.579585 kubelet[2409]: I0314 00:21:51.575879 2409 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:21:51.581778 kubelet[2409]: I0314 00:21:51.580146 2409 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:21:51.581778 kubelet[2409]: I0314 00:21:51.581040 2409 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:21:51.581778 kubelet[2409]: I0314 00:21:51.581225 2409 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:21:51.598092 kubelet[2409]: E0314 00:21:51.597810 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:21:51.600645 kubelet[2409]: E0314 00:21:51.600521 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:21:51.613893 kubelet[2409]: I0314 00:21:51.613404 2409 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:21:51.615225 kubelet[2409]: I0314 00:21:51.615051 2409 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:21:51.623169 kubelet[2409]: W0314 00:21:51.623053 2409 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:21:51.689310 kubelet[2409]: I0314 00:21:51.689181 2409 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:21:51.689310 kubelet[2409]: I0314 00:21:51.689331 2409 server.go:1289] "Started kubelet"
Mar 14 00:21:51.695318 kubelet[2409]: I0314 00:21:51.695020 2409 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:21:51.700765 kubelet[2409]: I0314 00:21:51.697055 2409 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:21:51.700765 kubelet[2409]: I0314 00:21:51.697271 2409 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:21:51.701999 kubelet[2409]: E0314 00:21:51.698988 2409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8d4c32f74919 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:21:51.689230617 +0000 UTC m=+3.375333999,LastTimestamp:2026-03-14 00:21:51.689230617 +0000 UTC m=+3.375333999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 14 00:21:51.703905 kubelet[2409]: I0314 00:21:51.703364 2409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:21:51.707206 kubelet[2409]: I0314 00:21:51.706540 2409 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:21:51.711775 kubelet[2409]: I0314 00:21:51.709130 2409 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:21:51.715785 kubelet[2409]: I0314 00:21:51.715095 2409 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:21:51.715785 kubelet[2409]: I0314 00:21:51.715444 2409 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:21:51.715927 kubelet[2409]: E0314 00:21:51.715900 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:21:51.717816 kubelet[2409]: I0314 00:21:51.716970 2409 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:21:51.717816 kubelet[2409]: E0314 00:21:51.717670 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms"
Mar 14 00:21:51.718877 kubelet[2409]: E0314 00:21:51.718763 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:21:51.719025 kubelet[2409]: E0314 00:21:51.718997 2409 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:21:51.721936 kubelet[2409]: I0314 00:21:51.721579 2409 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:21:51.722989 kubelet[2409]: I0314 00:21:51.722947 2409 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:21:51.735536 kubelet[2409]: I0314 00:21:51.735492 2409 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:21:51.826754 kubelet[2409]: E0314 00:21:51.823521 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:21:51.887933 kubelet[2409]: I0314 00:21:51.887766 2409 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:21:51.894925 kubelet[2409]: I0314 00:21:51.894487 2409 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:21:51.894925 kubelet[2409]: I0314 00:21:51.894633 2409 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:21:51.894925 kubelet[2409]: I0314 00:21:51.894801 2409 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:21:51.894925 kubelet[2409]: I0314 00:21:51.894861 2409 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 00:21:51.895144 kubelet[2409]: E0314 00:21:51.894932 2409 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:21:51.896393 kubelet[2409]: E0314 00:21:51.896311 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:21:51.908191 kubelet[2409]: I0314 00:21:51.908157 2409 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:21:51.909102 kubelet[2409]: I0314 00:21:51.908324 2409 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:21:51.909102 kubelet[2409]: I0314 00:21:51.908386 2409 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:21:51.917110 kubelet[2409]: I0314 00:21:51.914354 2409 policy_none.go:49] "None policy: Start"
Mar 14 00:21:51.917110 kubelet[2409]: I0314 00:21:51.914984 2409 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 00:21:51.917110 kubelet[2409]: I0314 00:21:51.916296 2409 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 00:21:51.919217 kubelet[2409]: E0314 00:21:51.918981 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms"
Mar 14 00:21:51.924039 kubelet[2409]: E0314 00:21:51.923866 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:21:51.932894 kubelet[2409]: E0314 00:21:51.932777 2409 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:21:51.933277 kubelet[2409]: I0314 00:21:51.933205 2409 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:21:51.933357 kubelet[2409]: I0314 00:21:51.933284 2409 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:21:51.935672 kubelet[2409]: I0314 00:21:51.934924 2409 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:21:51.938437 kubelet[2409]: E0314 00:21:51.938055 2409 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:21:51.938619 kubelet[2409]: E0314 00:21:51.938487 2409 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:21:52.014479 kubelet[2409]: E0314 00:21:52.014242 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:21:52.019099 kubelet[2409]: E0314 00:21:52.018445 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:21:52.020657 kubelet[2409]: I0314 00:21:52.020603 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3e5ea31bd78464caf75f78071418619-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3e5ea31bd78464caf75f78071418619\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:21:52.020657 kubelet[2409]: I0314 00:21:52.020667 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3e5ea31bd78464caf75f78071418619-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3e5ea31bd78464caf75f78071418619\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:21:52.020657 kubelet[2409]: I0314 00:21:52.020748 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:21:52.020873 kubelet[2409]: I0314 00:21:52.020772 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:21:52.020873 kubelet[2409]: I0314 00:21:52.020796 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:21:52.020873 kubelet[2409]: I0314 00:21:52.020821 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:21:52.020873 kubelet[2409]: I0314 00:21:52.020843 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3e5ea31bd78464caf75f78071418619-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3e5ea31bd78464caf75f78071418619\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:21:52.020873 kubelet[2409]: I0314 00:21:52.020862 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:21:52.020986 kubelet[2409]: I0314 00:21:52.020886 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:21:52.027954 kubelet[2409]: E0314 00:21:52.025859 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:21:52.039435 kubelet[2409]: I0314 00:21:52.039358 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:21:52.047189 kubelet[2409]: E0314 00:21:52.047066 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Mar 14 00:21:52.250343 kubelet[2409]: I0314 00:21:52.250093 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:21:52.251088 kubelet[2409]: E0314 00:21:52.251036 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Mar 14 00:21:52.317760 kubelet[2409]: E0314 00:21:52.317592 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:52.319770 kubelet[2409]: E0314 00:21:52.319651 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:52.322808 kubelet[2409]: E0314 00:21:52.321986 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms"
Mar 14 00:21:52.322922 containerd[1588]: time="2026-03-14T00:21:52.322437867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 14 00:21:52.323759 containerd[1588]: time="2026-03-14T00:21:52.322437856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3e5ea31bd78464caf75f78071418619,Namespace:kube-system,Attempt:0,}"
Mar 14 00:21:52.327244 kubelet[2409]: E0314 00:21:52.327205 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:52.328824 containerd[1588]: time="2026-03-14T00:21:52.328575194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 14 00:21:52.656769 kubelet[2409]: I0314 00:21:52.656011 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:21:52.656769 kubelet[2409]: E0314 00:21:52.656558 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Mar 14 00:21:52.675625 kubelet[2409]: E0314 00:21:52.675267 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:21:52.717800 kubelet[2409]: E0314 00:21:52.717656 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:21:52.795986 kubelet[2409]: E0314 00:21:52.795136 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:21:53.016939 kubelet[2409]: E0314 00:21:53.015685 2409 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:21:53.511633 kubelet[2409]: E0314 00:21:53.511502 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s"
Mar 14 00:21:53.521132 kubelet[2409]: E0314 00:21:53.512012 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:21:53.529661 kubelet[2409]: I0314 00:21:53.528976 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:21:53.531530 kubelet[2409]: E0314 00:21:53.531034 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Mar 14 00:21:54.227864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428965471.mount: Deactivated successfully.
Mar 14 00:21:54.783756 containerd[1588]: time="2026-03-14T00:21:54.782524336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:54.799155 containerd[1588]: time="2026-03-14T00:21:54.798095495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:21:54.799155 containerd[1588]: time="2026-03-14T00:21:54.798767881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:54.804400 containerd[1588]: time="2026-03-14T00:21:54.803026928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:21:54.807542 containerd[1588]: time="2026-03-14T00:21:54.806263085Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:54.813264 containerd[1588]: time="2026-03-14T00:21:54.811827300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:54.814043 containerd[1588]: time="2026-03-14T00:21:54.813963831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:21:54.822470 containerd[1588]: time="2026-03-14T00:21:54.821757968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:54.825277 containerd[1588]: time="2026-03-14T00:21:54.823827867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.500231184s"
Mar 14 00:21:54.830375 containerd[1588]: time="2026-03-14T00:21:54.830301529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.50726083s"
Mar 14 00:21:54.834125 containerd[1588]: time="2026-03-14T00:21:54.833954984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.505162204s"
Mar 14 00:21:54.864387 kubelet[2409]: E0314 00:21:54.864234 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:21:54.982578 kubelet[2409]: E0314 00:21:54.982495 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:21:55.120114 kubelet[2409]: E0314 00:21:55.119660 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="3.2s"
Mar 14 00:21:55.142945 kubelet[2409]: I0314 00:21:55.139467 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:21:55.142945 kubelet[2409]: E0314 00:21:55.140609 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Mar 14 00:21:55.434404 kubelet[2409]: E0314 00:21:55.427578 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:21:55.636655 kubelet[2409]: E0314 00:21:55.636428 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:21:56.215221 containerd[1588]: time="2026-03-14T00:21:56.212510249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:21:56.215221 containerd[1588]: time="2026-03-14T00:21:56.212828307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:21:56.215221 containerd[1588]: time="2026-03-14T00:21:56.212857418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:56.215221 containerd[1588]: time="2026-03-14T00:21:56.213342608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:56.218324 containerd[1588]: time="2026-03-14T00:21:56.216605085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:21:56.218324 containerd[1588]: time="2026-03-14T00:21:56.217098642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:21:56.218324 containerd[1588]: time="2026-03-14T00:21:56.217125890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:56.218324 containerd[1588]: time="2026-03-14T00:21:56.217295939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:56.429090 containerd[1588]: time="2026-03-14T00:21:56.418208771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:21:56.429090 containerd[1588]: time="2026-03-14T00:21:56.426439545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:21:56.429090 containerd[1588]: time="2026-03-14T00:21:56.426653149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:56.463345 containerd[1588]: time="2026-03-14T00:21:56.429268612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:56.891446 containerd[1588]: time="2026-03-14T00:21:56.891140429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3e5ea31bd78464caf75f78071418619,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb10f59f33b9b434cdcabb2959b2c95f2ce8d89c4c3e92e55f2b54e715a5956\"" Mar 14 00:21:56.894443 kubelet[2409]: E0314 00:21:56.894080 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:21:56.902531 containerd[1588]: time="2026-03-14T00:21:56.902356313Z" level=info msg="CreateContainer within sandbox \"3cb10f59f33b9b434cdcabb2959b2c95f2ce8d89c4c3e92e55f2b54e715a5956\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:21:56.906110 containerd[1588]: time="2026-03-14T00:21:56.906076851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fbf4f37713cd11cc9bb0dfd141f679d1634c0e01c1acf51041517d803573efe\"" Mar 14 00:21:56.910204 kubelet[2409]: E0314 00:21:56.909670 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:21:56.916275 containerd[1588]: time="2026-03-14T00:21:56.916227721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"a892b01f1ee381d651826cee52ec1732769277d01215d9711e348ffb05c9ce1a\"" Mar 14 00:21:56.917066 
kubelet[2409]: E0314 00:21:56.917028 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:21:56.921260 containerd[1588]: time="2026-03-14T00:21:56.921193093Z" level=info msg="CreateContainer within sandbox \"6fbf4f37713cd11cc9bb0dfd141f679d1634c0e01c1acf51041517d803573efe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:21:56.932669 containerd[1588]: time="2026-03-14T00:21:56.931121756Z" level=info msg="CreateContainer within sandbox \"a892b01f1ee381d651826cee52ec1732769277d01215d9711e348ffb05c9ce1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:21:56.976544 containerd[1588]: time="2026-03-14T00:21:56.976453458Z" level=info msg="CreateContainer within sandbox \"3cb10f59f33b9b434cdcabb2959b2c95f2ce8d89c4c3e92e55f2b54e715a5956\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"97cbb9cd17e46adf62ea0f5f8aca5072dcb5fa6d0dbe5b0fa8cd9502b7555c36\"" Mar 14 00:21:56.977998 containerd[1588]: time="2026-03-14T00:21:56.977531787Z" level=info msg="StartContainer for \"97cbb9cd17e46adf62ea0f5f8aca5072dcb5fa6d0dbe5b0fa8cd9502b7555c36\"" Mar 14 00:21:56.989886 containerd[1588]: time="2026-03-14T00:21:56.989478434Z" level=info msg="CreateContainer within sandbox \"6fbf4f37713cd11cc9bb0dfd141f679d1634c0e01c1acf51041517d803573efe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71c82b4353c582990a4eca3626758aedf882215a122c23cee3f90fe4d0ca8c1f\"" Mar 14 00:21:57.087656 containerd[1588]: time="2026-03-14T00:21:57.087378318Z" level=info msg="CreateContainer within sandbox \"a892b01f1ee381d651826cee52ec1732769277d01215d9711e348ffb05c9ce1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10f0e8ea94e7b572921a1adb87a066d67ee97be999fd0ea8fed56ff1c0e621d3\"" Mar 14 00:21:57.123956 
containerd[1588]: time="2026-03-14T00:21:57.121959064Z" level=info msg="StartContainer for \"71c82b4353c582990a4eca3626758aedf882215a122c23cee3f90fe4d0ca8c1f\"" Mar 14 00:21:57.161476 containerd[1588]: time="2026-03-14T00:21:57.160402419Z" level=info msg="StartContainer for \"10f0e8ea94e7b572921a1adb87a066d67ee97be999fd0ea8fed56ff1c0e621d3\"" Mar 14 00:21:57.233561 systemd[1]: run-containerd-runc-k8s.io-3cb10f59f33b9b434cdcabb2959b2c95f2ce8d89c4c3e92e55f2b54e715a5956-runc.L5yzBS.mount: Deactivated successfully. Mar 14 00:21:57.485932 kubelet[2409]: E0314 00:21:57.483477 2409 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:21:58.493320 kubelet[2409]: E0314 00:21:58.493158 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="6.4s" Mar 14 00:21:58.497284 kubelet[2409]: I0314 00:21:58.497153 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:21:58.497942 kubelet[2409]: E0314 00:21:58.497875 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Mar 14 00:21:58.530750 containerd[1588]: time="2026-03-14T00:21:58.529397090Z" level=info msg="StartContainer for \"10f0e8ea94e7b572921a1adb87a066d67ee97be999fd0ea8fed56ff1c0e621d3\" returns successfully" Mar 14 00:21:58.531442 containerd[1588]: time="2026-03-14T00:21:58.531062858Z" level=info msg="StartContainer for 
\"97cbb9cd17e46adf62ea0f5f8aca5072dcb5fa6d0dbe5b0fa8cd9502b7555c36\" returns successfully" Mar 14 00:21:58.564806 containerd[1588]: time="2026-03-14T00:21:58.562067137Z" level=info msg="StartContainer for \"71c82b4353c582990a4eca3626758aedf882215a122c23cee3f90fe4d0ca8c1f\" returns successfully" Mar 14 00:21:58.970049 kubelet[2409]: E0314 00:21:58.970002 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:21:58.973267 kubelet[2409]: E0314 00:21:58.971356 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:21:58.975568 kubelet[2409]: E0314 00:21:58.975335 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:21:58.975568 kubelet[2409]: E0314 00:21:58.975497 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:21:58.976515 kubelet[2409]: E0314 00:21:58.976293 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:21:58.976515 kubelet[2409]: E0314 00:21:58.976440 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:21:59.998816 kubelet[2409]: E0314 00:21:59.998174 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:21:59.998816 kubelet[2409]: E0314 00:21:59.998544 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:00.003459 kubelet[2409]: E0314 00:22:00.002958 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:22:00.003459 kubelet[2409]: E0314 00:22:00.003116 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:00.005776 kubelet[2409]: E0314 00:22:00.005468 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:22:00.005776 kubelet[2409]: E0314 00:22:00.005616 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:01.116273 kubelet[2409]: E0314 00:22:01.115217 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:22:01.116273 kubelet[2409]: E0314 00:22:01.116401 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:01.124908 kubelet[2409]: E0314 00:22:01.117148 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:22:01.124908 kubelet[2409]: E0314 00:22:01.117357 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:01.967630 kubelet[2409]: E0314 00:22:01.967590 2409 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 14 00:22:02.289612 kubelet[2409]: E0314 00:22:02.285167 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:22:02.294062 kubelet[2409]: E0314 00:22:02.293974 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:03.992076 kubelet[2409]: I0314 00:22:03.991940 2409 apiserver.go:52] "Watching apiserver" Mar 14 00:22:04.020331 kubelet[2409]: I0314 00:22:04.020160 2409 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:22:04.080129 kubelet[2409]: E0314 00:22:04.079892 2409 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189c8d4c32f74919 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:21:51.689230617 +0000 UTC m=+3.375333999,LastTimestamp:2026-03-14 00:21:51.689230617 +0000 UTC m=+3.375333999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:22:04.165803 kubelet[2409]: E0314 00:22:04.165163 2409 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189c8d4c33cfa3b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:21:51.703409587 +0000 UTC m=+3.389512970,LastTimestamp:2026-03-14 00:21:51.703409587 +0000 UTC m=+3.389512970,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:22:04.325779 kubelet[2409]: E0314 00:22:04.324832 2409 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 14 00:22:04.783774 kubelet[2409]: E0314 00:22:04.783514 2409 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 14 00:22:04.900751 kubelet[2409]: I0314 00:22:04.900571 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:22:04.924497 kubelet[2409]: I0314 00:22:04.924340 2409 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 14 00:22:04.924497 kubelet[2409]: E0314 00:22:04.924419 2409 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 14 00:22:05.017126 kubelet[2409]: I0314 00:22:05.016817 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:22:05.052425 kubelet[2409]: I0314 00:22:05.052195 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:22:05.054966 kubelet[2409]: E0314 00:22:05.052985 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:05.068013 kubelet[2409]: I0314 00:22:05.067904 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:22:05.070799 kubelet[2409]: E0314 00:22:05.068361 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:05.084749 kubelet[2409]: E0314 00:22:05.084570 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:05.292238 kubelet[2409]: E0314 00:22:05.291660 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:09.201916 kubelet[2409]: E0314 00:22:09.201550 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:09.288274 kubelet[2409]: I0314 00:22:09.286671 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.286614253 podStartE2EDuration="4.286614253s" podCreationTimestamp="2026-03-14 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:22:09.282896861 +0000 UTC m=+20.969000253" watchObservedRunningTime="2026-03-14 00:22:09.286614253 +0000 UTC m=+20.972717655" Mar 14 00:22:09.525509 kubelet[2409]: I0314 00:22:09.523364 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.523258952 podStartE2EDuration="4.523258952s" 
podCreationTimestamp="2026-03-14 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:22:09.310650318 +0000 UTC m=+20.996753700" watchObservedRunningTime="2026-03-14 00:22:09.523258952 +0000 UTC m=+21.209362334" Mar 14 00:22:10.232669 kubelet[2409]: E0314 00:22:10.232531 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:10.294093 kubelet[2409]: I0314 00:22:10.293847 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.293827871 podStartE2EDuration="5.293827871s" podCreationTimestamp="2026-03-14 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:22:09.525451393 +0000 UTC m=+21.211554785" watchObservedRunningTime="2026-03-14 00:22:10.293827871 +0000 UTC m=+21.979931253" Mar 14 00:22:12.516044 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-9.scope)... Mar 14 00:22:12.516161 systemd[1]: Reloading... Mar 14 00:22:12.740930 zram_generator::config[2748]: No configuration found. Mar 14 00:22:12.952991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:22:13.246857 systemd[1]: Reloading finished in 729 ms. Mar 14 00:22:13.336269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:13.369101 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:22:13.369903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:22:13.380389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:14.276566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:22:14.300295 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:22:14.430308 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:22:14.432773 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:22:14.432773 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:22:14.432773 kubelet[2797]: I0314 00:22:14.431505 2797 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:22:14.449210 kubelet[2797]: I0314 00:22:14.447606 2797 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:22:14.449210 kubelet[2797]: I0314 00:22:14.448381 2797 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:22:14.449425 kubelet[2797]: I0314 00:22:14.449346 2797 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:22:14.451921 kubelet[2797]: I0314 00:22:14.451053 2797 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:22:14.478591 kubelet[2797]: I0314 00:22:14.478327 2797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:22:14.537369 kubelet[2797]: E0314 00:22:14.534347 2797 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:22:14.537369 kubelet[2797]: I0314 00:22:14.534394 2797 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:22:14.577244 kubelet[2797]: I0314 00:22:14.577077 2797 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:22:14.579844 kubelet[2797]: I0314 00:22:14.579597 2797 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:22:14.580590 kubelet[2797]: I0314 00:22:14.580061 2797 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:22:14.580993 kubelet[2797]: I0314 00:22:14.580651 2797 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:22:14.580993 
kubelet[2797]: I0314 00:22:14.580675 2797 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:22:14.580993 kubelet[2797]: I0314 00:22:14.580886 2797 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:22:14.581403 kubelet[2797]: I0314 00:22:14.581331 2797 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:22:14.581403 kubelet[2797]: I0314 00:22:14.581381 2797 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:22:14.581518 kubelet[2797]: I0314 00:22:14.581429 2797 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:22:14.583587 kubelet[2797]: I0314 00:22:14.581614 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:22:14.587179 kubelet[2797]: I0314 00:22:14.587110 2797 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:22:14.593034 kubelet[2797]: I0314 00:22:14.593009 2797 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:22:14.599918 kubelet[2797]: I0314 00:22:14.599496 2797 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:22:14.599918 kubelet[2797]: I0314 00:22:14.599568 2797 server.go:1289] "Started kubelet" Mar 14 00:22:14.601177 kubelet[2797]: I0314 00:22:14.601044 2797 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:22:14.605968 kubelet[2797]: I0314 00:22:14.602939 2797 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:22:14.607031 kubelet[2797]: I0314 00:22:14.606936 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:22:14.608423 kubelet[2797]: I0314 00:22:14.607554 2797 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:22:14.617171 
kubelet[2797]: I0314 00:22:14.614988 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:22:14.626941 kubelet[2797]: I0314 00:22:14.625553 2797 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:22:14.626941 kubelet[2797]: I0314 00:22:14.623348 2797 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:22:14.628459 kubelet[2797]: I0314 00:22:14.628337 2797 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:22:14.628459 kubelet[2797]: I0314 00:22:14.628390 2797 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:22:14.656323 kubelet[2797]: I0314 00:22:14.642333 2797 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:22:14.661773 kubelet[2797]: I0314 00:22:14.658602 2797 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:22:14.661773 kubelet[2797]: E0314 00:22:14.660114 2797 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:22:14.663796 kubelet[2797]: I0314 00:22:14.663639 2797 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:22:14.670615 kubelet[2797]: I0314 00:22:14.670401 2797 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:22:14.691151 kubelet[2797]: I0314 00:22:14.690910 2797 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:22:14.691151 kubelet[2797]: I0314 00:22:14.691001 2797 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:22:14.691151 kubelet[2797]: I0314 00:22:14.691033 2797 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:22:14.691151 kubelet[2797]: I0314 00:22:14.691047 2797 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:22:14.691151 kubelet[2797]: E0314 00:22:14.691106 2797 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:22:14.794625 kubelet[2797]: E0314 00:22:14.792617 2797 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:22:15.116853 kubelet[2797]: E0314 00:22:15.116200 2797 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:22:15.200068 kubelet[2797]: I0314 00:22:15.199632 2797 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:22:15.200068 kubelet[2797]: I0314 00:22:15.199654 2797 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:22:15.200068 kubelet[2797]: I0314 00:22:15.199755 2797 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:22:15.200539 kubelet[2797]: I0314 00:22:15.200388 2797 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:22:15.200539 kubelet[2797]: I0314 00:22:15.200422 2797 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:22:15.200539 kubelet[2797]: I0314 00:22:15.200486 2797 policy_none.go:49] "None policy: Start" Mar 14 00:22:15.200539 kubelet[2797]: I0314 00:22:15.200501 2797 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:22:15.200539 kubelet[2797]: I0314 00:22:15.200514 2797 state_mem.go:35] "Initializing new in-memory state 
store" Mar 14 00:22:15.200963 kubelet[2797]: I0314 00:22:15.200654 2797 state_mem.go:75] "Updated machine memory state" Mar 14 00:22:15.204060 kubelet[2797]: E0314 00:22:15.203945 2797 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:22:15.206777 kubelet[2797]: I0314 00:22:15.205404 2797 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:22:15.206777 kubelet[2797]: I0314 00:22:15.205429 2797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:22:15.206777 kubelet[2797]: I0314 00:22:15.206424 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:22:15.213188 kubelet[2797]: E0314 00:22:15.213157 2797 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:22:15.457815 kubelet[2797]: I0314 00:22:15.456800 2797 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:22:15.500122 kubelet[2797]: I0314 00:22:15.499982 2797 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 14 00:22:15.500122 kubelet[2797]: I0314 00:22:15.500126 2797 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 14 00:22:15.519917 kubelet[2797]: I0314 00:22:15.519825 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:22:15.521941 kubelet[2797]: I0314 00:22:15.520208 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:22:15.521941 kubelet[2797]: I0314 00:22:15.521413 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:22:15.595558 kubelet[2797]: I0314 00:22:15.595296 2797 apiserver.go:52] "Watching apiserver" Mar 14 
00:22:15.602019 kubelet[2797]: E0314 00:22:15.601943 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:22:15.602185 kubelet[2797]: E0314 00:22:15.602160 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 14 00:22:15.662763 kubelet[2797]: E0314 00:22:15.662321 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 14 00:22:15.662763 kubelet[2797]: I0314 00:22:15.662549 2797 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:22:15.686523 kubelet[2797]: I0314 00:22:15.686354 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3e5ea31bd78464caf75f78071418619-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3e5ea31bd78464caf75f78071418619\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:22:15.686523 kubelet[2797]: I0314 00:22:15.686426 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3e5ea31bd78464caf75f78071418619-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3e5ea31bd78464caf75f78071418619\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:22:15.686523 kubelet[2797]: I0314 00:22:15.686448 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 
00:22:15.686523 kubelet[2797]: I0314 00:22:15.686465 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:22:15.686523 kubelet[2797]: I0314 00:22:15.686515 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:22:15.687031 kubelet[2797]: I0314 00:22:15.686533 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:22:15.687031 kubelet[2797]: I0314 00:22:15.686603 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3e5ea31bd78464caf75f78071418619-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3e5ea31bd78464caf75f78071418619\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:22:15.687031 kubelet[2797]: I0314 00:22:15.686619 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 
14 00:22:15.687031 kubelet[2797]: I0314 00:22:15.686633 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:22:15.904753 kubelet[2797]: E0314 00:22:15.902652 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:15.904753 kubelet[2797]: E0314 00:22:15.904164 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:16.068953 kubelet[2797]: E0314 00:22:16.048548 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:16.731217 kubelet[2797]: E0314 00:22:16.731062 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:16.733449 kubelet[2797]: E0314 00:22:16.731811 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:16.733449 kubelet[2797]: E0314 00:22:16.732134 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:17.866401 kubelet[2797]: I0314 00:22:17.866155 2797 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 
00:22:17.868292 containerd[1588]: time="2026-03-14T00:22:17.868038951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:22:17.889349 kubelet[2797]: E0314 00:22:17.869241 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:17.889349 kubelet[2797]: E0314 00:22:17.869264 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:17.941184 kubelet[2797]: I0314 00:22:17.940429 2797 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:22:18.764653 kubelet[2797]: I0314 00:22:18.764498 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02eb29d6-cae7-4970-a602-ec02a46a8313-kube-proxy\") pod \"kube-proxy-xmzcp\" (UID: \"02eb29d6-cae7-4970-a602-ec02a46a8313\") " pod="kube-system/kube-proxy-xmzcp" Mar 14 00:22:18.768615 kubelet[2797]: I0314 00:22:18.768316 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02eb29d6-cae7-4970-a602-ec02a46a8313-lib-modules\") pod \"kube-proxy-xmzcp\" (UID: \"02eb29d6-cae7-4970-a602-ec02a46a8313\") " pod="kube-system/kube-proxy-xmzcp" Mar 14 00:22:18.768615 kubelet[2797]: I0314 00:22:18.768431 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsl98\" (UniqueName: \"kubernetes.io/projected/02eb29d6-cae7-4970-a602-ec02a46a8313-kube-api-access-lsl98\") pod \"kube-proxy-xmzcp\" (UID: \"02eb29d6-cae7-4970-a602-ec02a46a8313\") " pod="kube-system/kube-proxy-xmzcp" Mar 14 00:22:18.768615 kubelet[2797]: 
I0314 00:22:18.768510 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02eb29d6-cae7-4970-a602-ec02a46a8313-xtables-lock\") pod \"kube-proxy-xmzcp\" (UID: \"02eb29d6-cae7-4970-a602-ec02a46a8313\") " pod="kube-system/kube-proxy-xmzcp" Mar 14 00:22:19.340149 kubelet[2797]: E0314 00:22:19.339473 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:19.368769 containerd[1588]: time="2026-03-14T00:22:19.368510826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmzcp,Uid:02eb29d6-cae7-4970-a602-ec02a46a8313,Namespace:kube-system,Attempt:0,}" Mar 14 00:22:19.788411 containerd[1588]: time="2026-03-14T00:22:19.787153253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:22:19.788411 containerd[1588]: time="2026-03-14T00:22:19.787398945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:22:19.788411 containerd[1588]: time="2026-03-14T00:22:19.787432506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:22:19.788411 containerd[1588]: time="2026-03-14T00:22:19.787805021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:22:20.329836 containerd[1588]: time="2026-03-14T00:22:20.329455555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmzcp,Uid:02eb29d6-cae7-4970-a602-ec02a46a8313,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2622158dba48372054b34a3c507950b4d2b3b72e6e1052fe0ba52fe3faeb96b\"" Mar 14 00:22:20.334250 kubelet[2797]: E0314 00:22:20.334067 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:20.527932 containerd[1588]: time="2026-03-14T00:22:20.527513390Z" level=info msg="CreateContainer within sandbox \"a2622158dba48372054b34a3c507950b4d2b3b72e6e1052fe0ba52fe3faeb96b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:22:20.650391 kubelet[2797]: I0314 00:22:20.649369 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/965af18a-66a1-4f0b-a411-e5c7fe098bd6-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-x82gm\" (UID: \"965af18a-66a1-4f0b-a411-e5c7fe098bd6\") " pod="tigera-operator/tigera-operator-6bf85f8dd-x82gm" Mar 14 00:22:20.650391 kubelet[2797]: I0314 00:22:20.649471 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2p2\" (UniqueName: \"kubernetes.io/projected/965af18a-66a1-4f0b-a411-e5c7fe098bd6-kube-api-access-cl2p2\") pod \"tigera-operator-6bf85f8dd-x82gm\" (UID: \"965af18a-66a1-4f0b-a411-e5c7fe098bd6\") " pod="tigera-operator/tigera-operator-6bf85f8dd-x82gm" Mar 14 00:22:20.673147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685265328.mount: Deactivated successfully. 
Mar 14 00:22:20.687460 containerd[1588]: time="2026-03-14T00:22:20.686283975Z" level=info msg="CreateContainer within sandbox \"a2622158dba48372054b34a3c507950b4d2b3b72e6e1052fe0ba52fe3faeb96b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a05850a4cc50b20245ef1e953ebd209ae294f39ea65ac9746131aba3bf6c90d3\"" Mar 14 00:22:20.688078 containerd[1588]: time="2026-03-14T00:22:20.688003366Z" level=info msg="StartContainer for \"a05850a4cc50b20245ef1e953ebd209ae294f39ea65ac9746131aba3bf6c90d3\"" Mar 14 00:22:20.955976 containerd[1588]: time="2026-03-14T00:22:20.954214224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-x82gm,Uid:965af18a-66a1-4f0b-a411-e5c7fe098bd6,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:22:21.027155 containerd[1588]: time="2026-03-14T00:22:21.026844388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:22:21.027155 containerd[1588]: time="2026-03-14T00:22:21.026931408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:22:21.029357 containerd[1588]: time="2026-03-14T00:22:21.026956605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:22:21.029357 containerd[1588]: time="2026-03-14T00:22:21.027126238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:22:21.043085 containerd[1588]: time="2026-03-14T00:22:21.037035406Z" level=info msg="StartContainer for \"a05850a4cc50b20245ef1e953ebd209ae294f39ea65ac9746131aba3bf6c90d3\" returns successfully" Mar 14 00:22:21.628783 containerd[1588]: time="2026-03-14T00:22:21.628533499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-x82gm,Uid:965af18a-66a1-4f0b-a411-e5c7fe098bd6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"459b0290ff9c3e70294c03da2305cd9c00a4eb958a3bb3a832e5fd195c32ab2e\"" Mar 14 00:22:21.635445 containerd[1588]: time="2026-03-14T00:22:21.635092107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:22:22.183223 kubelet[2797]: E0314 00:22:22.183185 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:22.225809 kubelet[2797]: I0314 00:22:22.222644 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xmzcp" podStartSLOduration=4.222619653 podStartE2EDuration="4.222619653s" podCreationTimestamp="2026-03-14 00:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:22:22.20323849 +0000 UTC m=+7.880206425" watchObservedRunningTime="2026-03-14 00:22:22.222619653 +0000 UTC m=+7.899587599" Mar 14 00:22:22.720356 kubelet[2797]: E0314 00:22:22.720016 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:22.731934 kubelet[2797]: E0314 00:22:22.731827 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 14 00:22:23.035042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739595275.mount: Deactivated successfully. Mar 14 00:22:23.200188 kubelet[2797]: E0314 00:22:23.199859 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:23.200188 kubelet[2797]: E0314 00:22:23.199856 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:23.200188 kubelet[2797]: E0314 00:22:23.199876 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:24.351838 kubelet[2797]: E0314 00:22:24.351597 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:25.215653 kubelet[2797]: E0314 00:22:25.215203 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:27.087865 containerd[1588]: time="2026-03-14T00:22:27.087472246Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:27.091790 containerd[1588]: time="2026-03-14T00:22:27.091645436Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:22:27.094332 containerd[1588]: time="2026-03-14T00:22:27.094283319Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:27.100097 
containerd[1588]: time="2026-03-14T00:22:27.100019252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:27.101797 containerd[1588]: time="2026-03-14T00:22:27.101663362Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.466490828s" Mar 14 00:22:27.101797 containerd[1588]: time="2026-03-14T00:22:27.101782784Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:22:27.131230 containerd[1588]: time="2026-03-14T00:22:27.130378981Z" level=info msg="CreateContainer within sandbox \"459b0290ff9c3e70294c03da2305cd9c00a4eb958a3bb3a832e5fd195c32ab2e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:22:27.227305 containerd[1588]: time="2026-03-14T00:22:27.227139665Z" level=info msg="CreateContainer within sandbox \"459b0290ff9c3e70294c03da2305cd9c00a4eb958a3bb3a832e5fd195c32ab2e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6e9d9096b7700c339143451427da61438e2b18aeab678b9e734f283a303ecf9a\"" Mar 14 00:22:27.231494 containerd[1588]: time="2026-03-14T00:22:27.230986861Z" level=info msg="StartContainer for \"6e9d9096b7700c339143451427da61438e2b18aeab678b9e734f283a303ecf9a\"" Mar 14 00:22:27.624261 containerd[1588]: time="2026-03-14T00:22:27.624168427Z" level=info msg="StartContainer for \"6e9d9096b7700c339143451427da61438e2b18aeab678b9e734f283a303ecf9a\" returns successfully" Mar 14 00:22:28.312045 kubelet[2797]: I0314 00:22:28.311466 2797 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-x82gm" podStartSLOduration=2.834383741 podStartE2EDuration="8.31140499s" podCreationTimestamp="2026-03-14 00:22:20 +0000 UTC" firstStartedPulling="2026-03-14 00:22:21.63304979 +0000 UTC m=+7.310017727" lastFinishedPulling="2026-03-14 00:22:27.11007104 +0000 UTC m=+12.787038976" observedRunningTime="2026-03-14 00:22:28.311221068 +0000 UTC m=+13.988189014" watchObservedRunningTime="2026-03-14 00:22:28.31140499 +0000 UTC m=+13.988372926" Mar 14 00:22:35.529632 update_engine[1565]: I20260314 00:22:35.529055 1565 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 14 00:22:35.529632 update_engine[1565]: I20260314 00:22:35.529131 1565 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 14 00:22:35.533777 update_engine[1565]: I20260314 00:22:35.530882 1565 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 14 00:22:35.535265 update_engine[1565]: I20260314 00:22:35.535164 1565 omaha_request_params.cc:62] Current group set to lts Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.541599 1565 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.541635 1565 update_attempter.cc:643] Scheduling an action processor start. 
Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.541663 1565 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.542587 1565 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.542903 1565 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.542928 1565 omaha_request_action.cc:272] Request: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: Mar 14 00:22:35.543861 update_engine[1565]: I20260314 00:22:35.542945 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:22:35.551352 locksmithd[1623]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 14 00:22:35.555242 update_engine[1565]: I20260314 00:22:35.555146 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:22:35.556084 update_engine[1565]: I20260314 00:22:35.555850 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:22:35.571006 update_engine[1565]: E20260314 00:22:35.570864 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:22:35.571161 update_engine[1565]: I20260314 00:22:35.571056 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 14 00:22:36.687385 sudo[1796]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:36.713149 sshd[1789]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:36.723422 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:49208.service: Deactivated successfully. Mar 14 00:22:36.732409 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:22:36.734778 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:22:36.739160 systemd-logind[1559]: Removed session 9. Mar 14 00:22:38.935267 kubelet[2797]: I0314 00:22:38.935089 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acc0e84-8a93-4f38-9f4b-2123cf826ffe-tigera-ca-bundle\") pod \"calico-typha-56945f4d5b-4ftll\" (UID: \"4acc0e84-8a93-4f38-9f4b-2123cf826ffe\") " pod="calico-system/calico-typha-56945f4d5b-4ftll" Mar 14 00:22:38.935267 kubelet[2797]: I0314 00:22:38.935179 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4acc0e84-8a93-4f38-9f4b-2123cf826ffe-typha-certs\") pod \"calico-typha-56945f4d5b-4ftll\" (UID: \"4acc0e84-8a93-4f38-9f4b-2123cf826ffe\") " pod="calico-system/calico-typha-56945f4d5b-4ftll" Mar 14 00:22:38.935267 kubelet[2797]: I0314 00:22:38.935207 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzw9s\" (UniqueName: \"kubernetes.io/projected/4acc0e84-8a93-4f38-9f4b-2123cf826ffe-kube-api-access-wzw9s\") pod \"calico-typha-56945f4d5b-4ftll\" (UID: 
\"4acc0e84-8a93-4f38-9f4b-2123cf826ffe\") " pod="calico-system/calico-typha-56945f4d5b-4ftll" Mar 14 00:22:39.136552 kubelet[2797]: I0314 00:22:39.136425 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-cni-bin-dir\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.136786 kubelet[2797]: I0314 00:22:39.136559 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-policysync\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.136786 kubelet[2797]: I0314 00:22:39.136611 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-flexvol-driver-host\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.136786 kubelet[2797]: I0314 00:22:39.136637 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-var-run-calico\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.136786 kubelet[2797]: I0314 00:22:39.136663 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-xtables-lock\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " 
pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.136786 kubelet[2797]: I0314 00:22:39.136757 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-bpffs\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137001 kubelet[2797]: I0314 00:22:39.136788 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-var-lib-calico\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137001 kubelet[2797]: I0314 00:22:39.136809 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5cc67788-0cd0-4910-ab3b-2a11e834a49f-node-certs\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137001 kubelet[2797]: I0314 00:22:39.136833 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-cni-log-dir\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137001 kubelet[2797]: I0314 00:22:39.136851 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-cni-net-dir\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137001 kubelet[2797]: I0314 00:22:39.136874 2797 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-lib-modules\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137285 kubelet[2797]: I0314 00:22:39.136895 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cc67788-0cd0-4910-ab3b-2a11e834a49f-tigera-ca-bundle\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137285 kubelet[2797]: I0314 00:22:39.136932 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-sys-fs\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137285 kubelet[2797]: I0314 00:22:39.136957 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrp2\" (UniqueName: \"kubernetes.io/projected/5cc67788-0cd0-4910-ab3b-2a11e834a49f-kube-api-access-gqrp2\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.137285 kubelet[2797]: I0314 00:22:39.136978 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5cc67788-0cd0-4910-ab3b-2a11e834a49f-nodeproc\") pod \"calico-node-h46mx\" (UID: \"5cc67788-0cd0-4910-ab3b-2a11e834a49f\") " pod="calico-system/calico-node-h46mx" Mar 14 00:22:39.238096 kubelet[2797]: E0314 00:22:39.237864 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:22:39.247582 containerd[1588]: time="2026-03-14T00:22:39.246863165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56945f4d5b-4ftll,Uid:4acc0e84-8a93-4f38-9f4b-2123cf826ffe,Namespace:calico-system,Attempt:0,}"
Mar 14 00:22:39.252586 kubelet[2797]: E0314 00:22:39.252489 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.252586 kubelet[2797]: W0314 00:22:39.252550 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.252879 kubelet[2797]: E0314 00:22:39.252640 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.256093 kubelet[2797]: E0314 00:22:39.253945 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.256093 kubelet[2797]: W0314 00:22:39.254059 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.256093 kubelet[2797]: E0314 00:22:39.254172 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.267849 kubelet[2797]: E0314 00:22:39.267143 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.267849 kubelet[2797]: W0314 00:22:39.267173 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.267849 kubelet[2797]: E0314 00:22:39.267199 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.287230 kubelet[2797]: E0314 00:22:39.286534 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba"
Mar 14 00:22:39.329961 kubelet[2797]: E0314 00:22:39.329865 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.329961 kubelet[2797]: W0314 00:22:39.329920 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.329961 kubelet[2797]: E0314 00:22:39.329945 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.338097 kubelet[2797]: E0314 00:22:39.337322 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.338097 kubelet[2797]: W0314 00:22:39.337373 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.338097 kubelet[2797]: E0314 00:22:39.337398 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.341076 kubelet[2797]: E0314 00:22:39.340680 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.341076 kubelet[2797]: W0314 00:22:39.340856 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.341076 kubelet[2797]: E0314 00:22:39.340876 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.355503 kubelet[2797]: E0314 00:22:39.341176 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.355503 kubelet[2797]: W0314 00:22:39.341189 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.355503 kubelet[2797]: E0314 00:22:39.341201 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.355503 kubelet[2797]: E0314 00:22:39.342094 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.355503 kubelet[2797]: W0314 00:22:39.342105 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.355503 kubelet[2797]: E0314 00:22:39.342117 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.355503 kubelet[2797]: E0314 00:22:39.343079 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.355503 kubelet[2797]: W0314 00:22:39.343090 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.355503 kubelet[2797]: E0314 00:22:39.343183 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.355980 kubelet[2797]: I0314 00:22:39.344927 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1606a97-9d6b-48a1-9c1e-67441e5ad5ba-kubelet-dir\") pod \"csi-node-driver-4bblz\" (UID: \"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba\") " pod="calico-system/csi-node-driver-4bblz"
Mar 14 00:22:39.355980 kubelet[2797]: E0314 00:22:39.345271 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.355980 kubelet[2797]: W0314 00:22:39.345284 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.355980 kubelet[2797]: E0314 00:22:39.345297 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.355980 kubelet[2797]: E0314 00:22:39.346013 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.355980 kubelet[2797]: W0314 00:22:39.346027 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.355980 kubelet[2797]: E0314 00:22:39.346039 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.357963 kubelet[2797]: E0314 00:22:39.357892 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.357963 kubelet[2797]: W0314 00:22:39.357936 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.358084 kubelet[2797]: E0314 00:22:39.357954 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.359643 kubelet[2797]: E0314 00:22:39.359527 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.359643 kubelet[2797]: W0314 00:22:39.359570 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.359643 kubelet[2797]: E0314 00:22:39.359584 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.361281 kubelet[2797]: E0314 00:22:39.361208 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.361281 kubelet[2797]: W0314 00:22:39.361247 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.361281 kubelet[2797]: E0314 00:22:39.361261 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.362243 kubelet[2797]: E0314 00:22:39.362024 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.362243 kubelet[2797]: W0314 00:22:39.362225 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.362243 kubelet[2797]: E0314 00:22:39.362240 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.363314 kubelet[2797]: E0314 00:22:39.363155 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.363314 kubelet[2797]: W0314 00:22:39.363193 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.363314 kubelet[2797]: E0314 00:22:39.363205 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.365665 kubelet[2797]: E0314 00:22:39.365526 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.365665 kubelet[2797]: W0314 00:22:39.365563 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.365665 kubelet[2797]: E0314 00:22:39.365577 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.366324 kubelet[2797]: E0314 00:22:39.366195 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.366324 kubelet[2797]: W0314 00:22:39.366233 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.366324 kubelet[2797]: E0314 00:22:39.366245 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.367025 kubelet[2797]: E0314 00:22:39.366865 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.367025 kubelet[2797]: W0314 00:22:39.366889 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.367025 kubelet[2797]: E0314 00:22:39.366903 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.367803 kubelet[2797]: E0314 00:22:39.367650 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.367803 kubelet[2797]: W0314 00:22:39.367741 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.367803 kubelet[2797]: E0314 00:22:39.367755 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.369189 kubelet[2797]: E0314 00:22:39.369013 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.369417 kubelet[2797]: W0314 00:22:39.369270 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.369642 kubelet[2797]: E0314 00:22:39.369420 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.371633 kubelet[2797]: E0314 00:22:39.371365 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.371633 kubelet[2797]: W0314 00:22:39.371382 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.371633 kubelet[2797]: E0314 00:22:39.371396 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.373081 kubelet[2797]: E0314 00:22:39.373009 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.373081 kubelet[2797]: W0314 00:22:39.373048 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.373081 kubelet[2797]: E0314 00:22:39.373062 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.373819 kubelet[2797]: E0314 00:22:39.373663 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.373819 kubelet[2797]: W0314 00:22:39.373755 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.373819 kubelet[2797]: E0314 00:22:39.373769 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.374439 kubelet[2797]: E0314 00:22:39.374249 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.374439 kubelet[2797]: W0314 00:22:39.374285 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.374439 kubelet[2797]: E0314 00:22:39.374297 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.375026 kubelet[2797]: E0314 00:22:39.374951 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.375026 kubelet[2797]: W0314 00:22:39.374989 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.375026 kubelet[2797]: E0314 00:22:39.375002 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.375340 containerd[1588]: time="2026-03-14T00:22:39.375145389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:22:39.375448 kubelet[2797]: E0314 00:22:39.375386 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.375448 kubelet[2797]: W0314 00:22:39.375424 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.375448 kubelet[2797]: E0314 00:22:39.375437 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.379437 containerd[1588]: time="2026-03-14T00:22:39.379162320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:22:39.379753 containerd[1588]: time="2026-03-14T00:22:39.379642701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:39.380100 containerd[1588]: time="2026-03-14T00:22:39.380066716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:39.424413 containerd[1588]: time="2026-03-14T00:22:39.424287331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h46mx,Uid:5cc67788-0cd0-4910-ab3b-2a11e834a49f,Namespace:calico-system,Attempt:0,}"
Mar 14 00:22:39.450771 kubelet[2797]: E0314 00:22:39.449083 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.450771 kubelet[2797]: W0314 00:22:39.449138 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.450771 kubelet[2797]: E0314 00:22:39.449166 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.450771 kubelet[2797]: I0314 00:22:39.449219 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f1606a97-9d6b-48a1-9c1e-67441e5ad5ba-registration-dir\") pod \"csi-node-driver-4bblz\" (UID: \"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba\") " pod="calico-system/csi-node-driver-4bblz"
Mar 14 00:22:39.450771 kubelet[2797]: E0314 00:22:39.449633 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.450771 kubelet[2797]: W0314 00:22:39.449650 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.450771 kubelet[2797]: E0314 00:22:39.449665 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.450771 kubelet[2797]: I0314 00:22:39.449783 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlmqz\" (UniqueName: \"kubernetes.io/projected/f1606a97-9d6b-48a1-9c1e-67441e5ad5ba-kube-api-access-mlmqz\") pod \"csi-node-driver-4bblz\" (UID: \"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba\") " pod="calico-system/csi-node-driver-4bblz"
Mar 14 00:22:39.452026 kubelet[2797]: E0314 00:22:39.451764 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.452026 kubelet[2797]: W0314 00:22:39.451784 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.452026 kubelet[2797]: E0314 00:22:39.451801 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.453019 kubelet[2797]: E0314 00:22:39.452966 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.453019 kubelet[2797]: W0314 00:22:39.452985 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.453019 kubelet[2797]: E0314 00:22:39.452999 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.455172 kubelet[2797]: E0314 00:22:39.453534 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.455172 kubelet[2797]: W0314 00:22:39.453550 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.455172 kubelet[2797]: E0314 00:22:39.453568 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.455172 kubelet[2797]: E0314 00:22:39.454373 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.455172 kubelet[2797]: W0314 00:22:39.454388 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.455172 kubelet[2797]: E0314 00:22:39.454409 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.455448 kubelet[2797]: E0314 00:22:39.455252 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.455448 kubelet[2797]: W0314 00:22:39.455266 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.455448 kubelet[2797]: E0314 00:22:39.455276 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.456824 kubelet[2797]: E0314 00:22:39.456032 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.456824 kubelet[2797]: W0314 00:22:39.456054 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.456824 kubelet[2797]: E0314 00:22:39.456069 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.456824 kubelet[2797]: I0314 00:22:39.456244 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f1606a97-9d6b-48a1-9c1e-67441e5ad5ba-varrun\") pod \"csi-node-driver-4bblz\" (UID: \"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba\") " pod="calico-system/csi-node-driver-4bblz"
Mar 14 00:22:39.456824 kubelet[2797]: E0314 00:22:39.456565 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.456824 kubelet[2797]: W0314 00:22:39.456579 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.456824 kubelet[2797]: E0314 00:22:39.456596 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.457238 kubelet[2797]: E0314 00:22:39.457045 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.457238 kubelet[2797]: W0314 00:22:39.457059 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.457238 kubelet[2797]: E0314 00:22:39.457075 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.457583 kubelet[2797]: E0314 00:22:39.457548 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.457583 kubelet[2797]: W0314 00:22:39.457565 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.457583 kubelet[2797]: E0314 00:22:39.457575 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.460164 kubelet[2797]: E0314 00:22:39.460027 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.460164 kubelet[2797]: W0314 00:22:39.460074 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.460164 kubelet[2797]: E0314 00:22:39.460091 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.462797 kubelet[2797]: E0314 00:22:39.461142 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.462797 kubelet[2797]: W0314 00:22:39.461154 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.462797 kubelet[2797]: E0314 00:22:39.461164 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.462797 kubelet[2797]: E0314 00:22:39.461621 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.462797 kubelet[2797]: W0314 00:22:39.461635 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.462797 kubelet[2797]: E0314 00:22:39.461648 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.462797 kubelet[2797]: I0314 00:22:39.461670 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f1606a97-9d6b-48a1-9c1e-67441e5ad5ba-socket-dir\") pod \"csi-node-driver-4bblz\" (UID: \"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba\") " pod="calico-system/csi-node-driver-4bblz"
Mar 14 00:22:39.462797 kubelet[2797]: E0314 00:22:39.462180 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.462797 kubelet[2797]: W0314 00:22:39.462194 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.463129 kubelet[2797]: E0314 00:22:39.462206 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.463129 kubelet[2797]: E0314 00:22:39.462963 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.463129 kubelet[2797]: W0314 00:22:39.462975 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.463129 kubelet[2797]: E0314 00:22:39.462985 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.463559 kubelet[2797]: E0314 00:22:39.463430 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.463559 kubelet[2797]: W0314 00:22:39.463493 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.463559 kubelet[2797]: E0314 00:22:39.463505 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.536346 containerd[1588]: time="2026-03-14T00:22:39.535202700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:22:39.536346 containerd[1588]: time="2026-03-14T00:22:39.535650038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:22:39.536346 containerd[1588]: time="2026-03-14T00:22:39.535684092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:39.536346 containerd[1588]: time="2026-03-14T00:22:39.535947706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:39.553563 containerd[1588]: time="2026-03-14T00:22:39.553359265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56945f4d5b-4ftll,Uid:4acc0e84-8a93-4f38-9f4b-2123cf826ffe,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8042f8104e5a5b595f0ceddc486418b6ef20ffcce2a6b56534f8561b3eab99f\""
Mar 14 00:22:39.554970 kubelet[2797]: E0314 00:22:39.554859 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:22:39.561771 containerd[1588]: time="2026-03-14T00:22:39.561562847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 14 00:22:39.562979 kubelet[2797]: E0314 00:22:39.562920 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.562979 kubelet[2797]: W0314 00:22:39.562973 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.563119 kubelet[2797]: E0314 00:22:39.563081 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.565661 kubelet[2797]: E0314 00:22:39.565581 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.565661 kubelet[2797]: W0314 00:22:39.565628 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.565661 kubelet[2797]: E0314 00:22:39.565650 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.568291 kubelet[2797]: E0314 00:22:39.568002 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.568291 kubelet[2797]: W0314 00:22:39.568186 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.568291 kubelet[2797]: E0314 00:22:39.568207 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.569269 kubelet[2797]: E0314 00:22:39.569232 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.569269 kubelet[2797]: W0314 00:22:39.569248 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.569382 kubelet[2797]: E0314 00:22:39.569268 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:39.571795 kubelet[2797]: E0314 00:22:39.571572 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:39.572077 kubelet[2797]: W0314 00:22:39.571683 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:39.572077 kubelet[2797]: E0314 00:22:39.572000 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.574134 kubelet[2797]: E0314 00:22:39.574074 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.574134 kubelet[2797]: W0314 00:22:39.574121 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.575302 kubelet[2797]: E0314 00:22:39.574144 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.576223 kubelet[2797]: E0314 00:22:39.576196 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.576533 kubelet[2797]: W0314 00:22:39.576303 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.576533 kubelet[2797]: E0314 00:22:39.576324 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.577647 kubelet[2797]: E0314 00:22:39.577406 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.577647 kubelet[2797]: W0314 00:22:39.577423 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.577647 kubelet[2797]: E0314 00:22:39.577439 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.579223 kubelet[2797]: E0314 00:22:39.579202 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.579319 kubelet[2797]: W0314 00:22:39.579301 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.579415 kubelet[2797]: E0314 00:22:39.579396 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.580211 kubelet[2797]: E0314 00:22:39.580192 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.580313 kubelet[2797]: W0314 00:22:39.580297 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.580388 kubelet[2797]: E0314 00:22:39.580374 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.581128 kubelet[2797]: E0314 00:22:39.581110 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.581392 kubelet[2797]: W0314 00:22:39.581215 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.581392 kubelet[2797]: E0314 00:22:39.581234 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.582267 kubelet[2797]: E0314 00:22:39.582071 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.582267 kubelet[2797]: W0314 00:22:39.582086 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.582267 kubelet[2797]: E0314 00:22:39.582098 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.582805 kubelet[2797]: E0314 00:22:39.582573 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.582805 kubelet[2797]: W0314 00:22:39.582589 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.582805 kubelet[2797]: E0314 00:22:39.582600 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.583225 kubelet[2797]: E0314 00:22:39.583208 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.583314 kubelet[2797]: W0314 00:22:39.583298 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.583387 kubelet[2797]: E0314 00:22:39.583373 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.585360 kubelet[2797]: E0314 00:22:39.585300 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.585519 kubelet[2797]: W0314 00:22:39.585493 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.585630 kubelet[2797]: E0314 00:22:39.585610 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.586677 kubelet[2797]: E0314 00:22:39.586511 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.586677 kubelet[2797]: W0314 00:22:39.586530 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.586677 kubelet[2797]: E0314 00:22:39.586546 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.587763 kubelet[2797]: E0314 00:22:39.587535 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.587763 kubelet[2797]: W0314 00:22:39.587556 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.587763 kubelet[2797]: E0314 00:22:39.587573 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.588964 kubelet[2797]: E0314 00:22:39.588911 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.588964 kubelet[2797]: W0314 00:22:39.588930 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.588964 kubelet[2797]: E0314 00:22:39.588946 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.591129 kubelet[2797]: E0314 00:22:39.591025 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.591129 kubelet[2797]: W0314 00:22:39.591084 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.591129 kubelet[2797]: E0314 00:22:39.591119 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:39.592901 kubelet[2797]: E0314 00:22:39.592853 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.592901 kubelet[2797]: W0314 00:22:39.592893 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.592997 kubelet[2797]: E0314 00:22:39.592919 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.601414 kubelet[2797]: E0314 00:22:39.601249 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:39.601414 kubelet[2797]: W0314 00:22:39.601302 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:39.601414 kubelet[2797]: E0314 00:22:39.601330 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:39.666258 containerd[1588]: time="2026-03-14T00:22:39.666110040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h46mx,Uid:5cc67788-0cd0-4910-ab3b-2a11e834a49f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\"" Mar 14 00:22:40.271246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219188442.mount: Deactivated successfully. 
Mar 14 00:22:40.693929 kubelet[2797]: E0314 00:22:40.693119 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba"
Mar 14 00:22:41.484249 containerd[1588]: time="2026-03-14T00:22:41.484140848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:41.485811 containerd[1588]: time="2026-03-14T00:22:41.485637439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 14 00:22:41.489831 containerd[1588]: time="2026-03-14T00:22:41.489644333Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:41.496163 containerd[1588]: time="2026-03-14T00:22:41.496072927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:41.502367 containerd[1588]: time="2026-03-14T00:22:41.497554749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.935940056s"
Mar 14 00:22:41.502367 containerd[1588]: time="2026-03-14T00:22:41.497606215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 14 00:22:41.510070 containerd[1588]: time="2026-03-14T00:22:41.509965082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 14 00:22:41.537300 containerd[1588]: time="2026-03-14T00:22:41.537007905Z" level=info msg="CreateContainer within sandbox \"c8042f8104e5a5b595f0ceddc486418b6ef20ffcce2a6b56534f8561b3eab99f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 14 00:22:41.596767 containerd[1588]: time="2026-03-14T00:22:41.596578641Z" level=info msg="CreateContainer within sandbox \"c8042f8104e5a5b595f0ceddc486418b6ef20ffcce2a6b56534f8561b3eab99f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"548954366185baf1dc3566be0a34c2bfbbae1b0f9f447851caebab4ffc013dea\""
Mar 14 00:22:41.597930 containerd[1588]: time="2026-03-14T00:22:41.597882424Z" level=info msg="StartContainer for \"548954366185baf1dc3566be0a34c2bfbbae1b0f9f447851caebab4ffc013dea\""
Mar 14 00:22:41.799136 containerd[1588]: time="2026-03-14T00:22:41.798890711Z" level=info msg="StartContainer for \"548954366185baf1dc3566be0a34c2bfbbae1b0f9f447851caebab4ffc013dea\" returns successfully"
Mar 14 00:22:42.487537 kubelet[2797]: E0314 00:22:42.487362 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:22:42.519040 kubelet[2797]: I0314 00:22:42.517163 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56945f4d5b-4ftll" podStartSLOduration=2.57242394 podStartE2EDuration="4.517141054s" podCreationTimestamp="2026-03-14 00:22:38 +0000 UTC" firstStartedPulling="2026-03-14 00:22:39.561187203 +0000 UTC m=+25.238155139" lastFinishedPulling="2026-03-14 00:22:41.505904307 +0000 UTC m=+27.182872253" observedRunningTime="2026-03-14 00:22:42.51680354 +0000 UTC m=+28.193771496" watchObservedRunningTime="2026-03-14 00:22:42.517141054 +0000 UTC m=+28.194109010"
Mar 14 00:22:42.529668 kubelet[2797]: E0314 00:22:42.529538 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.529668 kubelet[2797]: W0314 00:22:42.529583 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.529668 kubelet[2797]: E0314 00:22:42.529616 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.530457 kubelet[2797]: E0314 00:22:42.530163 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.530457 kubelet[2797]: W0314 00:22:42.530189 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.530457 kubelet[2797]: E0314 00:22:42.530208 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.531236 kubelet[2797]: E0314 00:22:42.530856 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.531236 kubelet[2797]: W0314 00:22:42.530880 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.531236 kubelet[2797]: E0314 00:22:42.530895 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.531482 kubelet[2797]: E0314 00:22:42.531463 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.531581 kubelet[2797]: W0314 00:22:42.531482 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.531581 kubelet[2797]: E0314 00:22:42.531539 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.533143 kubelet[2797]: E0314 00:22:42.532656 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.533143 kubelet[2797]: W0314 00:22:42.532676 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.533143 kubelet[2797]: E0314 00:22:42.532750 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.533471 kubelet[2797]: E0314 00:22:42.533224 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.533471 kubelet[2797]: W0314 00:22:42.533237 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.533471 kubelet[2797]: E0314 00:22:42.533250 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.534340 kubelet[2797]: E0314 00:22:42.534123 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.534340 kubelet[2797]: W0314 00:22:42.534143 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.534340 kubelet[2797]: E0314 00:22:42.534156 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.535624 kubelet[2797]: E0314 00:22:42.535375 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.535624 kubelet[2797]: W0314 00:22:42.535415 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.535624 kubelet[2797]: E0314 00:22:42.535433 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.537273 kubelet[2797]: E0314 00:22:42.537001 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.537273 kubelet[2797]: W0314 00:22:42.537021 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.537273 kubelet[2797]: E0314 00:22:42.537034 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.538115 kubelet[2797]: E0314 00:22:42.537681 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.538115 kubelet[2797]: W0314 00:22:42.537789 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.538115 kubelet[2797]: E0314 00:22:42.537807 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.548428 kubelet[2797]: E0314 00:22:42.548351 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.548428 kubelet[2797]: W0314 00:22:42.548402 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.548589 kubelet[2797]: E0314 00:22:42.548423 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.552917 kubelet[2797]: E0314 00:22:42.552034 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.552917 kubelet[2797]: W0314 00:22:42.552058 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.552917 kubelet[2797]: E0314 00:22:42.552074 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.552917 kubelet[2797]: E0314 00:22:42.552588 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.552917 kubelet[2797]: W0314 00:22:42.552605 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.552917 kubelet[2797]: E0314 00:22:42.552621 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.554489 kubelet[2797]: E0314 00:22:42.554429 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.554489 kubelet[2797]: W0314 00:22:42.554478 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.554653 kubelet[2797]: E0314 00:22:42.554492 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.555539 kubelet[2797]: E0314 00:22:42.555452 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.555539 kubelet[2797]: W0314 00:22:42.555489 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.555539 kubelet[2797]: E0314 00:22:42.555544 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.558952 kubelet[2797]: E0314 00:22:42.558892 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.558952 kubelet[2797]: W0314 00:22:42.558935 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.558952 kubelet[2797]: E0314 00:22:42.558952 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.560050 kubelet[2797]: E0314 00:22:42.560010 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.560050 kubelet[2797]: W0314 00:22:42.560045 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.560163 kubelet[2797]: E0314 00:22:42.560058 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.561286 kubelet[2797]: E0314 00:22:42.560976 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.561286 kubelet[2797]: W0314 00:22:42.561023 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.561286 kubelet[2797]: E0314 00:22:42.561040 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.563202 kubelet[2797]: E0314 00:22:42.563086 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.563202 kubelet[2797]: W0314 00:22:42.563101 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.563202 kubelet[2797]: E0314 00:22:42.563118 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.564032 kubelet[2797]: E0314 00:22:42.563845 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.564032 kubelet[2797]: W0314 00:22:42.563858 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.564032 kubelet[2797]: E0314 00:22:42.563870 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.565325 kubelet[2797]: E0314 00:22:42.565254 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.565625 kubelet[2797]: W0314 00:22:42.565295 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.565625 kubelet[2797]: E0314 00:22:42.565382 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.568417 kubelet[2797]: E0314 00:22:42.566209 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.568417 kubelet[2797]: W0314 00:22:42.566225 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.568417 kubelet[2797]: E0314 00:22:42.566238 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.568417 kubelet[2797]: E0314 00:22:42.567230 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.568417 kubelet[2797]: W0314 00:22:42.567243 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.568417 kubelet[2797]: E0314 00:22:42.567255 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.568417 kubelet[2797]: E0314 00:22:42.567853 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.568417 kubelet[2797]: W0314 00:22:42.567867 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.568417 kubelet[2797]: E0314 00:22:42.567883 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.570292 kubelet[2797]: E0314 00:22:42.569206 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.570292 kubelet[2797]: W0314 00:22:42.569317 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.570292 kubelet[2797]: E0314 00:22:42.569351 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:22:42.570755 kubelet[2797]: E0314 00:22:42.570439 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:22:42.570755 kubelet[2797]: W0314 00:22:42.570456 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:22:42.570755 kubelet[2797]: E0314 00:22:42.570471 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 14 00:22:42.572271 kubelet[2797]: E0314 00:22:42.571191 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.572271 kubelet[2797]: W0314 00:22:42.571310 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.572271 kubelet[2797]: E0314 00:22:42.571325 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:42.572271 kubelet[2797]: E0314 00:22:42.572177 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.572271 kubelet[2797]: W0314 00:22:42.572191 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.572271 kubelet[2797]: E0314 00:22:42.572204 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:42.575276 kubelet[2797]: E0314 00:22:42.575240 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.575276 kubelet[2797]: W0314 00:22:42.575263 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.575276 kubelet[2797]: E0314 00:22:42.575279 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:42.576162 kubelet[2797]: E0314 00:22:42.576117 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.576162 kubelet[2797]: W0314 00:22:42.576132 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.576333 kubelet[2797]: E0314 00:22:42.576283 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:42.578216 kubelet[2797]: E0314 00:22:42.576962 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.578216 kubelet[2797]: W0314 00:22:42.577009 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.578216 kubelet[2797]: E0314 00:22:42.577031 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:42.579236 kubelet[2797]: E0314 00:22:42.579146 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.579236 kubelet[2797]: W0314 00:22:42.579170 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.579236 kubelet[2797]: E0314 00:22:42.579193 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:22:42.580795 kubelet[2797]: E0314 00:22:42.580479 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:22:42.580795 kubelet[2797]: W0314 00:22:42.580535 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:22:42.580795 kubelet[2797]: E0314 00:22:42.580553 2797 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:22:42.580999 containerd[1588]: time="2026-03-14T00:22:42.580605291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:42.582358 containerd[1588]: time="2026-03-14T00:22:42.582257435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 14 00:22:42.586195 containerd[1588]: time="2026-03-14T00:22:42.585960713Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:42.591391 containerd[1588]: time="2026-03-14T00:22:42.591237847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:42.592644 containerd[1588]: time="2026-03-14T00:22:42.592495350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.082431252s" Mar 14 00:22:42.592644 containerd[1588]: time="2026-03-14T00:22:42.592611999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 14 00:22:42.602019 containerd[1588]: time="2026-03-14T00:22:42.601875426Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:22:42.633667 containerd[1588]: time="2026-03-14T00:22:42.633376528Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"409466ae0363c60c697b8f4e5cc5d1fe8bb179e4b2cc1b860bc5f19738fae073\"" Mar 14 00:22:42.636639 containerd[1588]: time="2026-03-14T00:22:42.634644809Z" level=info msg="StartContainer for \"409466ae0363c60c697b8f4e5cc5d1fe8bb179e4b2cc1b860bc5f19738fae073\"" Mar 14 00:22:42.692110 kubelet[2797]: E0314 00:22:42.692070 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:42.809083 containerd[1588]: time="2026-03-14T00:22:42.808996738Z" level=info msg="StartContainer for \"409466ae0363c60c697b8f4e5cc5d1fe8bb179e4b2cc1b860bc5f19738fae073\" returns successfully" Mar 14 00:22:42.908001 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-409466ae0363c60c697b8f4e5cc5d1fe8bb179e4b2cc1b860bc5f19738fae073-rootfs.mount: Deactivated successfully. Mar 14 00:22:43.070353 containerd[1588]: time="2026-03-14T00:22:43.069375571Z" level=info msg="shim disconnected" id=409466ae0363c60c697b8f4e5cc5d1fe8bb179e4b2cc1b860bc5f19738fae073 namespace=k8s.io Mar 14 00:22:43.070353 containerd[1588]: time="2026-03-14T00:22:43.069641871Z" level=warning msg="cleaning up after shim disconnected" id=409466ae0363c60c697b8f4e5cc5d1fe8bb179e4b2cc1b860bc5f19738fae073 namespace=k8s.io Mar 14 00:22:43.070353 containerd[1588]: time="2026-03-14T00:22:43.069660036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:22:43.492811 kubelet[2797]: I0314 00:22:43.492481 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:22:43.493584 kubelet[2797]: E0314 00:22:43.493121 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:22:43.496042 containerd[1588]: time="2026-03-14T00:22:43.495980402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:22:44.718657 kubelet[2797]: E0314 00:22:44.714471 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:45.502708 update_engine[1565]: I20260314 00:22:45.502612 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:22:45.503518 update_engine[1565]: I20260314 00:22:45.503221 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:22:45.503518 update_engine[1565]: I20260314 00:22:45.503505 1565 libcurl_http_fetcher.cc:449] Setting up 
timeout source: 1 seconds. Mar 14 00:22:45.521942 update_engine[1565]: E20260314 00:22:45.521647 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:22:45.521942 update_engine[1565]: I20260314 00:22:45.521835 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 14 00:22:46.697911 kubelet[2797]: E0314 00:22:46.696522 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:48.694298 kubelet[2797]: E0314 00:22:48.693767 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:50.698044 kubelet[2797]: E0314 00:22:50.694872 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:51.402394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208483955.mount: Deactivated successfully. 
Mar 14 00:22:51.560882 containerd[1588]: time="2026-03-14T00:22:51.560243090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:51.565126 containerd[1588]: time="2026-03-14T00:22:51.565041622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 14 00:22:51.569787 containerd[1588]: time="2026-03-14T00:22:51.569635768Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:51.598529 containerd[1588]: time="2026-03-14T00:22:51.575492646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:51.598529 containerd[1588]: time="2026-03-14T00:22:51.576903049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.080845211s" Mar 14 00:22:51.598529 containerd[1588]: time="2026-03-14T00:22:51.598000687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 14 00:22:51.620053 containerd[1588]: time="2026-03-14T00:22:51.620002513Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:22:51.710918 containerd[1588]: time="2026-03-14T00:22:51.710601773Z" level=info 
msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ff22eeeb973eeb9f9425e1965425dbb6cc5f3bfb6d396f2d7b44bc037fe284ed\"" Mar 14 00:22:51.714629 containerd[1588]: time="2026-03-14T00:22:51.714570985Z" level=info msg="StartContainer for \"ff22eeeb973eeb9f9425e1965425dbb6cc5f3bfb6d396f2d7b44bc037fe284ed\"" Mar 14 00:22:52.080466 containerd[1588]: time="2026-03-14T00:22:52.080369637Z" level=info msg="StartContainer for \"ff22eeeb973eeb9f9425e1965425dbb6cc5f3bfb6d396f2d7b44bc037fe284ed\" returns successfully" Mar 14 00:22:52.261996 containerd[1588]: time="2026-03-14T00:22:52.260947243Z" level=info msg="shim disconnected" id=ff22eeeb973eeb9f9425e1965425dbb6cc5f3bfb6d396f2d7b44bc037fe284ed namespace=k8s.io Mar 14 00:22:52.261996 containerd[1588]: time="2026-03-14T00:22:52.261049577Z" level=warning msg="cleaning up after shim disconnected" id=ff22eeeb973eeb9f9425e1965425dbb6cc5f3bfb6d396f2d7b44bc037fe284ed namespace=k8s.io Mar 14 00:22:52.261996 containerd[1588]: time="2026-03-14T00:22:52.261074033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:22:52.407827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff22eeeb973eeb9f9425e1965425dbb6cc5f3bfb6d396f2d7b44bc037fe284ed-rootfs.mount: Deactivated successfully. 
Mar 14 00:22:52.573816 containerd[1588]: time="2026-03-14T00:22:52.573610874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:22:52.692303 kubelet[2797]: E0314 00:22:52.691560 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:54.725226 kubelet[2797]: E0314 00:22:54.722665 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:55.570119 update_engine[1565]: I20260314 00:22:55.568451 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:22:55.595351 update_engine[1565]: I20260314 00:22:55.577237 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:22:55.595351 update_engine[1565]: I20260314 00:22:55.577993 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:22:55.610068 update_engine[1565]: E20260314 00:22:55.608385 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:22:55.610068 update_engine[1565]: I20260314 00:22:55.608603 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 14 00:22:56.695792 kubelet[2797]: E0314 00:22:56.695559 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:22:59.197844 kubelet[2797]: E0314 00:22:59.197180 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:23:01.143191 kubelet[2797]: E0314 00:23:01.141374 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:23:02.692365 kubelet[2797]: E0314 00:23:02.692272 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:23:03.621465 kubelet[2797]: I0314 00:23:03.620285 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:23:03.623198 
kubelet[2797]: E0314 00:23:03.620868 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:04.160370 kubelet[2797]: E0314 00:23:04.160278 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:04.701864 kubelet[2797]: E0314 00:23:04.692137 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:23:04.877573 containerd[1588]: time="2026-03-14T00:23:04.877481816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:04.880342 containerd[1588]: time="2026-03-14T00:23:04.879901051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:23:04.882401 containerd[1588]: time="2026-03-14T00:23:04.882292054Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:04.888961 containerd[1588]: time="2026-03-14T00:23:04.888043914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:04.888961 containerd[1588]: time="2026-03-14T00:23:04.888848401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 12.315063468s" Mar 14 00:23:04.888961 containerd[1588]: time="2026-03-14T00:23:04.888879460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:23:04.904523 containerd[1588]: time="2026-03-14T00:23:04.904348182Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:23:04.967215 containerd[1588]: time="2026-03-14T00:23:04.966955788Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7462172565d50d4fd67fa2641365871f1c2496b53efd8fa65ba9219b74779a8\"" Mar 14 00:23:04.968809 containerd[1588]: time="2026-03-14T00:23:04.968380782Z" level=info msg="StartContainer for \"d7462172565d50d4fd67fa2641365871f1c2496b53efd8fa65ba9219b74779a8\"" Mar 14 00:23:05.283041 containerd[1588]: time="2026-03-14T00:23:05.282251272Z" level=info msg="StartContainer for \"d7462172565d50d4fd67fa2641365871f1c2496b53efd8fa65ba9219b74779a8\" returns successfully" Mar 14 00:23:05.506270 update_engine[1565]: I20260314 00:23:05.505350 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:23:05.506270 update_engine[1565]: I20260314 00:23:05.505832 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:23:05.507236 update_engine[1565]: I20260314 00:23:05.506309 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:23:05.523636 update_engine[1565]: E20260314 00:23:05.523391 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:23:05.523636 update_engine[1565]: I20260314 00:23:05.523563 1565 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 14 00:23:05.523636 update_engine[1565]: I20260314 00:23:05.523586 1565 omaha_request_action.cc:617] Omaha request response: Mar 14 00:23:05.523970 update_engine[1565]: E20260314 00:23:05.523835 1565 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 14 00:23:05.523970 update_engine[1565]: I20260314 00:23:05.523882 1565 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 14 00:23:05.523970 update_engine[1565]: I20260314 00:23:05.523897 1565 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:23:05.523970 update_engine[1565]: I20260314 00:23:05.523910 1565 update_attempter.cc:306] Processing Done. Mar 14 00:23:05.523970 update_engine[1565]: E20260314 00:23:05.523934 1565 update_attempter.cc:619] Update failed. Mar 14 00:23:05.523970 update_engine[1565]: I20260314 00:23:05.523947 1565 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 14 00:23:05.523970 update_engine[1565]: I20260314 00:23:05.523958 1565 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 14 00:23:05.523970 update_engine[1565]: I20260314 00:23:05.523969 1565 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 14 00:23:05.524316 update_engine[1565]: I20260314 00:23:05.524115 1565 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 14 00:23:05.524316 update_engine[1565]: I20260314 00:23:05.524202 1565 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 14 00:23:05.524316 update_engine[1565]: I20260314 00:23:05.524214 1565 omaha_request_action.cc:272] Request: Mar 14 00:23:05.524316 update_engine[1565]: Mar 14 00:23:05.524316 update_engine[1565]: Mar 14 00:23:05.524316 update_engine[1565]: Mar 14 00:23:05.524316 update_engine[1565]: Mar 14 00:23:05.524316 update_engine[1565]: Mar 14 00:23:05.524316 update_engine[1565]: Mar 14 00:23:05.524316 update_engine[1565]: I20260314 00:23:05.524227 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:23:05.524682 update_engine[1565]: I20260314 00:23:05.524657 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:23:05.527222 update_engine[1565]: I20260314 00:23:05.525058 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:23:05.530910 locksmithd[1623]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 14 00:23:05.552398 update_engine[1565]: E20260314 00:23:05.551470 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:23:05.552398 update_engine[1565]: I20260314 00:23:05.552003 1565 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 14 00:23:05.554896 update_engine[1565]: I20260314 00:23:05.554436 1565 omaha_request_action.cc:617] Omaha request response: Mar 14 00:23:05.554896 update_engine[1565]: I20260314 00:23:05.554502 1565 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:23:05.554896 update_engine[1565]: I20260314 00:23:05.554517 1565 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:23:05.554896 update_engine[1565]: I20260314 00:23:05.554529 1565 update_attempter.cc:306] Processing Done. Mar 14 00:23:05.554896 update_engine[1565]: I20260314 00:23:05.554545 1565 update_attempter.cc:310] Error event sent. 
Mar 14 00:23:05.554896 update_engine[1565]: I20260314 00:23:05.554575 1565 update_check_scheduler.cc:74] Next update check in 42m47s
Mar 14 00:23:05.556915 locksmithd[1623]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 14 00:23:06.699429 kubelet[2797]: E0314 00:23:06.698946 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba"
Mar 14 00:23:06.765080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7462172565d50d4fd67fa2641365871f1c2496b53efd8fa65ba9219b74779a8-rootfs.mount: Deactivated successfully.
Mar 14 00:23:06.783226 containerd[1588]: time="2026-03-14T00:23:06.783059544Z" level=info msg="shim disconnected" id=d7462172565d50d4fd67fa2641365871f1c2496b53efd8fa65ba9219b74779a8 namespace=k8s.io
Mar 14 00:23:06.783226 containerd[1588]: time="2026-03-14T00:23:06.783155345Z" level=warning msg="cleaning up after shim disconnected" id=d7462172565d50d4fd67fa2641365871f1c2496b53efd8fa65ba9219b74779a8 namespace=k8s.io
Mar 14 00:23:06.783226 containerd[1588]: time="2026-03-14T00:23:06.783213085Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:06.784083 kubelet[2797]: I0314 00:23:06.783465 2797 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:23:07.009070 kubelet[2797]: I0314 00:23:07.008302 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/730ac9f6-6ce5-4082-a3ad-c868f729e031-calico-apiserver-certs\") pod \"calico-apiserver-d5d89d5cb-dtq7x\" (UID: \"730ac9f6-6ce5-4082-a3ad-c868f729e031\") " pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x"
Mar 14 00:23:07.009070 kubelet[2797]: I0314 00:23:07.008354 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dpz4\" (UniqueName: \"kubernetes.io/projected/67d55006-38d6-455f-9fde-745c7e34d464-kube-api-access-9dpz4\") pod \"coredns-674b8bbfcf-tl9f4\" (UID: \"67d55006-38d6-455f-9fde-745c7e34d464\") " pod="kube-system/coredns-674b8bbfcf-tl9f4"
Mar 14 00:23:07.009070 kubelet[2797]: I0314 00:23:07.008375 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-backend-key-pair\") pod \"whisker-66bbbb9b4-8w5mx\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " pod="calico-system/whisker-66bbbb9b4-8w5mx"
Mar 14 00:23:07.009070 kubelet[2797]: I0314 00:23:07.008393 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67d55006-38d6-455f-9fde-745c7e34d464-config-volume\") pod \"coredns-674b8bbfcf-tl9f4\" (UID: \"67d55006-38d6-455f-9fde-745c7e34d464\") " pod="kube-system/coredns-674b8bbfcf-tl9f4"
Mar 14 00:23:07.009070 kubelet[2797]: I0314 00:23:07.008412 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-nginx-config\") pod \"whisker-66bbbb9b4-8w5mx\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " pod="calico-system/whisker-66bbbb9b4-8w5mx"
Mar 14 00:23:07.009449 kubelet[2797]: I0314 00:23:07.008427 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqn9\" (UniqueName: \"kubernetes.io/projected/e7612124-f1e4-47f2-970d-51a3ea494c99-kube-api-access-sdqn9\") pod \"whisker-66bbbb9b4-8w5mx\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " pod="calico-system/whisker-66bbbb9b4-8w5mx"
Mar 14 00:23:07.009449 kubelet[2797]: I0314 00:23:07.008445 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbl4p\" (UniqueName: \"kubernetes.io/projected/730ac9f6-6ce5-4082-a3ad-c868f729e031-kube-api-access-jbl4p\") pod \"calico-apiserver-d5d89d5cb-dtq7x\" (UID: \"730ac9f6-6ce5-4082-a3ad-c868f729e031\") " pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x"
Mar 14 00:23:07.009449 kubelet[2797]: I0314 00:23:07.008459 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-ca-bundle\") pod \"whisker-66bbbb9b4-8w5mx\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " pod="calico-system/whisker-66bbbb9b4-8w5mx"
Mar 14 00:23:07.115882 kubelet[2797]: I0314 00:23:07.114318 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23027385-ff4e-4dfa-87df-bf52afa804b0-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-xr6mf\" (UID: \"23027385-ff4e-4dfa-87df-bf52afa804b0\") " pod="calico-system/goldmane-5b85766d88-xr6mf"
Mar 14 00:23:07.115882 kubelet[2797]: I0314 00:23:07.114403 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc25z\" (UniqueName: \"kubernetes.io/projected/99edc0ed-4d50-4c4c-9806-84b2bb9168af-kube-api-access-qc25z\") pod \"calico-kube-controllers-6f7859c787-l2px5\" (UID: \"99edc0ed-4d50-4c4c-9806-84b2bb9168af\") " pod="calico-system/calico-kube-controllers-6f7859c787-l2px5"
Mar 14 00:23:07.115882 kubelet[2797]: I0314 00:23:07.114479 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23027385-ff4e-4dfa-87df-bf52afa804b0-config\") pod
\"goldmane-5b85766d88-xr6mf\" (UID: \"23027385-ff4e-4dfa-87df-bf52afa804b0\") " pod="calico-system/goldmane-5b85766d88-xr6mf"
Mar 14 00:23:07.115882 kubelet[2797]: I0314 00:23:07.114504 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grq9l\" (UniqueName: \"kubernetes.io/projected/b7bb7cd5-cb1e-4575-9842-e88d47314fe4-kube-api-access-grq9l\") pod \"calico-apiserver-d5d89d5cb-76h6t\" (UID: \"b7bb7cd5-cb1e-4575-9842-e88d47314fe4\") " pod="calico-system/calico-apiserver-d5d89d5cb-76h6t"
Mar 14 00:23:07.115882 kubelet[2797]: I0314 00:23:07.114529 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99edc0ed-4d50-4c4c-9806-84b2bb9168af-tigera-ca-bundle\") pod \"calico-kube-controllers-6f7859c787-l2px5\" (UID: \"99edc0ed-4d50-4c4c-9806-84b2bb9168af\") " pod="calico-system/calico-kube-controllers-6f7859c787-l2px5"
Mar 14 00:23:07.116310 kubelet[2797]: I0314 00:23:07.114593 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvgpj\" (UniqueName: \"kubernetes.io/projected/23027385-ff4e-4dfa-87df-bf52afa804b0-kube-api-access-qvgpj\") pod \"goldmane-5b85766d88-xr6mf\" (UID: \"23027385-ff4e-4dfa-87df-bf52afa804b0\") " pod="calico-system/goldmane-5b85766d88-xr6mf"
Mar 14 00:23:07.116310 kubelet[2797]: I0314 00:23:07.114619 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7bb7cd5-cb1e-4575-9842-e88d47314fe4-calico-apiserver-certs\") pod \"calico-apiserver-d5d89d5cb-76h6t\" (UID: \"b7bb7cd5-cb1e-4575-9842-e88d47314fe4\") " pod="calico-system/calico-apiserver-d5d89d5cb-76h6t"
Mar 14 00:23:07.116310 kubelet[2797]: I0314 00:23:07.114644 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5170bc80-85e6-4371-b313-d56321f1c8e2-config-volume\") pod \"coredns-674b8bbfcf-rjgfq\" (UID: \"5170bc80-85e6-4371-b313-d56321f1c8e2\") " pod="kube-system/coredns-674b8bbfcf-rjgfq"
Mar 14 00:23:07.116310 kubelet[2797]: I0314 00:23:07.114678 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/23027385-ff4e-4dfa-87df-bf52afa804b0-goldmane-key-pair\") pod \"goldmane-5b85766d88-xr6mf\" (UID: \"23027385-ff4e-4dfa-87df-bf52afa804b0\") " pod="calico-system/goldmane-5b85766d88-xr6mf"
Mar 14 00:23:07.116310 kubelet[2797]: I0314 00:23:07.114803 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmpx9\" (UniqueName: \"kubernetes.io/projected/5170bc80-85e6-4371-b313-d56321f1c8e2-kube-api-access-zmpx9\") pod \"coredns-674b8bbfcf-rjgfq\" (UID: \"5170bc80-85e6-4371-b313-d56321f1c8e2\") " pod="kube-system/coredns-674b8bbfcf-rjgfq"
Mar 14 00:23:07.299888 kubelet[2797]: E0314 00:23:07.297023 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:07.304885 containerd[1588]: time="2026-03-14T00:23:07.300378970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tl9f4,Uid:67d55006-38d6-455f-9fde-745c7e34d464,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:07.324576 containerd[1588]: time="2026-03-14T00:23:07.324518058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66bbbb9b4-8w5mx,Uid:e7612124-f1e4-47f2-970d-51a3ea494c99,Namespace:calico-system,Attempt:0,}"
Mar 14 00:23:07.325960 containerd[1588]: time="2026-03-14T00:23:07.325563565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-dtq7x,Uid:730ac9f6-6ce5-4082-a3ad-c868f729e031,Namespace:calico-system,Attempt:0,}"
Mar 14 00:23:07.378786 containerd[1588]: time="2026-03-14T00:23:07.376392601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7859c787-l2px5,Uid:99edc0ed-4d50-4c4c-9806-84b2bb9168af,Namespace:calico-system,Attempt:0,}"
Mar 14 00:23:07.378959 kubelet[2797]: E0314 00:23:07.378016 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:07.383822 containerd[1588]: time="2026-03-14T00:23:07.383608993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xr6mf,Uid:23027385-ff4e-4dfa-87df-bf52afa804b0,Namespace:calico-system,Attempt:0,}"
Mar 14 00:23:07.384245 containerd[1588]: time="2026-03-14T00:23:07.384080489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjgfq,Uid:5170bc80-85e6-4371-b313-d56321f1c8e2,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:07.403066 containerd[1588]: time="2026-03-14T00:23:07.402951355Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 14 00:23:07.643877 containerd[1588]: time="2026-03-14T00:23:07.643822283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-76h6t,Uid:b7bb7cd5-cb1e-4575-9842-e88d47314fe4,Namespace:calico-system,Attempt:0,}"
Mar 14 00:23:07.712682 containerd[1588]: time="2026-03-14T00:23:07.711596572Z" level=info msg="CreateContainer within sandbox \"0573f0dd813213ceabc708dfb00a432c6119af55c40bc2ce3a5a43a565e804f8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d3abc2af39ef57794aa019d86560ba5981de69fd86751bd1ff5824dfa13d634a\""
Mar 14 00:23:07.729745 containerd[1588]:
time="2026-03-14T00:23:07.726592900Z" level=info msg="StartContainer for \"d3abc2af39ef57794aa019d86560ba5981de69fd86751bd1ff5824dfa13d634a\""
Mar 14 00:23:08.171624 containerd[1588]: time="2026-03-14T00:23:08.171536180Z" level=error msg="Failed to destroy network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.172587 containerd[1588]: time="2026-03-14T00:23:08.172143812Z" level=error msg="Failed to destroy network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.175678 containerd[1588]: time="2026-03-14T00:23:08.174824279Z" level=error msg="Failed to destroy network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.175678 containerd[1588]: time="2026-03-14T00:23:08.175648916Z" level=error msg="encountered an error cleaning up failed sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.176009 containerd[1588]: time="2026-03-14T00:23:08.175819660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-dtq7x,Uid:730ac9f6-6ce5-4082-a3ad-c868f729e031,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.176435 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17-shm.mount: Deactivated successfully.
Mar 14 00:23:08.179319 containerd[1588]: time="2026-03-14T00:23:08.178621805Z" level=error msg="encountered an error cleaning up failed sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.179319 containerd[1588]: time="2026-03-14T00:23:08.178775517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tl9f4,Uid:67d55006-38d6-455f-9fde-745c7e34d464,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.187028 containerd[1588]: time="2026-03-14T00:23:08.186180586Z" level=error msg="Failed to destroy network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.187614 containerd[1588]: time="2026-03-14T00:23:08.186903469Z" level=error msg="encountered an error cleaning up failed sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.189372 kubelet[2797]: E0314 00:23:08.188805 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.189372 kubelet[2797]: E0314 00:23:08.188836 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.189372 kubelet[2797]: E0314 00:23:08.188922 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x"
Mar 14 00:23:08.189372 kubelet[2797]: E0314 00:23:08.188929 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code =
Unknown desc = failed to setup network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tl9f4"
Mar 14 00:23:08.191811 kubelet[2797]: E0314 00:23:08.188996 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tl9f4"
Mar 14 00:23:08.191811 kubelet[2797]: E0314 00:23:08.188992 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x"
Mar 14 00:23:08.191811 kubelet[2797]: E0314 00:23:08.189078 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tl9f4_kube-system(67d55006-38d6-455f-9fde-745c7e34d464)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tl9f4_kube-system(67d55006-38d6-455f-9fde-745c7e34d464)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tl9f4" podUID="67d55006-38d6-455f-9fde-745c7e34d464"
Mar 14 00:23:08.192046 kubelet[2797]: E0314 00:23:08.189261 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5d89d5cb-dtq7x_calico-system(730ac9f6-6ce5-4082-a3ad-c868f729e031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5d89d5cb-dtq7x_calico-system(730ac9f6-6ce5-4082-a3ad-c868f729e031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x" podUID="730ac9f6-6ce5-4082-a3ad-c868f729e031"
Mar 14 00:23:08.192306 containerd[1588]: time="2026-03-14T00:23:08.188486893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66bbbb9b4-8w5mx,Uid:e7612124-f1e4-47f2-970d-51a3ea494c99,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.192821 containerd[1588]: time="2026-03-14T00:23:08.188783565Z" level=error msg="encountered an error cleaning up failed sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.192949 containerd[1588]: time="2026-03-14T00:23:08.192917202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7859c787-l2px5,Uid:99edc0ed-4d50-4c4c-9806-84b2bb9168af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.193462 kubelet[2797]: E0314 00:23:08.193421 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.194385 kubelet[2797]: E0314 00:23:08.193779 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.194385 kubelet[2797]: E0314 00:23:08.193987 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66bbbb9b4-8w5mx"
Mar 14 00:23:08.194385 kubelet[2797]: E0314 00:23:08.194015 2797 kuberuntime_manager.go:1252]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66bbbb9b4-8w5mx"
Mar 14 00:23:08.194549 kubelet[2797]: E0314 00:23:08.194069 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66bbbb9b4-8w5mx_calico-system(e7612124-f1e4-47f2-970d-51a3ea494c99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66bbbb9b4-8w5mx_calico-system(e7612124-f1e4-47f2-970d-51a3ea494c99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66bbbb9b4-8w5mx" podUID="e7612124-f1e4-47f2-970d-51a3ea494c99"
Mar 14 00:23:08.194549 kubelet[2797]: E0314 00:23:08.194280 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f7859c787-l2px5"
Mar 14 00:23:08.194549 kubelet[2797]: E0314 00:23:08.194307 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f7859c787-l2px5"
Mar 14 00:23:08.194836 kubelet[2797]: E0314 00:23:08.194346 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f7859c787-l2px5_calico-system(99edc0ed-4d50-4c4c-9806-84b2bb9168af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f7859c787-l2px5_calico-system(99edc0ed-4d50-4c4c-9806-84b2bb9168af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f7859c787-l2px5" podUID="99edc0ed-4d50-4c4c-9806-84b2bb9168af"
Mar 14 00:23:08.212314 containerd[1588]: time="2026-03-14T00:23:08.211611397Z" level=error msg="Failed to destroy network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.212547 containerd[1588]: time="2026-03-14T00:23:08.212395977Z" level=error msg="encountered an error cleaning up failed sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.212547 containerd[1588]: time="2026-03-14T00:23:08.212459198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-76h6t,Uid:b7bb7cd5-cb1e-4575-9842-e88d47314fe4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.213350 kubelet[2797]: E0314 00:23:08.213293 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.213994 kubelet[2797]: E0314 00:23:08.213773 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5d89d5cb-76h6t"
Mar 14 00:23:08.213994 kubelet[2797]: E0314 00:23:08.213892 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5d89d5cb-76h6t"
Mar 14 00:23:08.214804 kubelet[2797]: E0314 00:23:08.214764 2797
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5d89d5cb-76h6t_calico-system(b7bb7cd5-cb1e-4575-9842-e88d47314fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5d89d5cb-76h6t_calico-system(b7bb7cd5-cb1e-4575-9842-e88d47314fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d5d89d5cb-76h6t" podUID="b7bb7cd5-cb1e-4575-9842-e88d47314fe4"
Mar 14 00:23:08.216923 containerd[1588]: time="2026-03-14T00:23:08.216825755Z" level=error msg="Failed to destroy network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.217968 containerd[1588]: time="2026-03-14T00:23:08.217848709Z" level=error msg="encountered an error cleaning up failed sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.217968 containerd[1588]: time="2026-03-14T00:23:08.217944851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjgfq,Uid:5170bc80-85e6-4371-b313-d56321f1c8e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.218534 kubelet[2797]: E0314 00:23:08.218433 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.218622 kubelet[2797]: E0314 00:23:08.218544 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rjgfq"
Mar 14 00:23:08.218622 kubelet[2797]: E0314 00:23:08.218578 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rjgfq"
Mar 14 00:23:08.218795 kubelet[2797]: E0314 00:23:08.218677 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rjgfq_kube-system(5170bc80-85e6-4371-b313-d56321f1c8e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rjgfq_kube-system(5170bc80-85e6-4371-b313-d56321f1c8e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rjgfq" podUID="5170bc80-85e6-4371-b313-d56321f1c8e2"
Mar 14 00:23:08.223604 containerd[1588]: time="2026-03-14T00:23:08.223053293Z" level=info msg="StartContainer for \"d3abc2af39ef57794aa019d86560ba5981de69fd86751bd1ff5824dfa13d634a\" returns successfully"
Mar 14 00:23:08.235872 containerd[1588]: time="2026-03-14T00:23:08.235682747Z" level=error msg="Failed to destroy network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.237153 containerd[1588]: time="2026-03-14T00:23:08.236541549Z" level=error msg="encountered an error cleaning up failed sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.237153 containerd[1588]: time="2026-03-14T00:23:08.236641699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xr6mf,Uid:23027385-ff4e-4dfa-87df-bf52afa804b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.238914 kubelet[2797]: E0314
00:23:08.237607 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:23:08.239911 kubelet[2797]: E0314 00:23:08.239677 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xr6mf"
Mar 14 00:23:08.240021 kubelet[2797]: E0314 00:23:08.239925 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xr6mf"
Mar 14 00:23:08.240067 kubelet[2797]: E0314 00:23:08.240001 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-xr6mf_calico-system(23027385-ff4e-4dfa-87df-bf52afa804b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-xr6mf_calico-system(23027385-ff4e-4dfa-87df-bf52afa804b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-xr6mf" podUID="23027385-ff4e-4dfa-87df-bf52afa804b0"
Mar 14 00:23:08.359653 kubelet[2797]: I0314 00:23:08.357559 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470"
Mar 14 00:23:08.366862 kubelet[2797]: I0314 00:23:08.366169 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f"
Mar 14 00:23:08.374203 containerd[1588]: time="2026-03-14T00:23:08.374045983Z" level=info msg="StopPodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\""
Mar 14 00:23:08.380773 containerd[1588]: time="2026-03-14T00:23:08.378594167Z" level=info msg="Ensure that sandbox 84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470 in task-service has been cleanup successfully"
Mar 14 00:23:08.382414 containerd[1588]: time="2026-03-14T00:23:08.380280000Z" level=info msg="StopPodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\""
Mar 14 00:23:08.382414 containerd[1588]: time="2026-03-14T00:23:08.382085642Z" level=info msg="Ensure that sandbox 970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f in task-service has been cleanup successfully"
Mar 14 00:23:08.401211 kubelet[2797]: I0314 00:23:08.401176 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:23:08.414394 kubelet[2797]: I0314 00:23:08.408864 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h46mx" podStartSLOduration=4.188299113 podStartE2EDuration="29.408838429s" podCreationTimestamp="2026-03-14 00:22:39 +0000 UTC" firstStartedPulling="2026-03-14 00:22:39.670427695 +0000 UTC m=+25.347395632" lastFinishedPulling="2026-03-14 00:23:04.890967012 +0000 UTC m=+50.567934948" observedRunningTime="2026-03-14 00:23:08.406894814 +0000 UTC m=+54.083862770" watchObservedRunningTime="2026-03-14 00:23:08.408838429 +0000 UTC m=+54.085806396"
Mar 14 00:23:08.414545 containerd[1588]: time="2026-03-14T00:23:08.410899796Z" level=info msg="StopPodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\""
Mar 14 00:23:08.414545 containerd[1588]: time="2026-03-14T00:23:08.412006720Z" level=info msg="Ensure that sandbox 24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5 in task-service has been cleanup successfully"
Mar 14 00:23:08.419803 kubelet[2797]: I0314 00:23:08.419139 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5"
Mar 14 00:23:08.424065 containerd[1588]: time="2026-03-14T00:23:08.422680805Z" level=info msg="StopPodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\""
Mar 14 00:23:08.425839 containerd[1588]: time="2026-03-14T00:23:08.424339654Z" level=info msg="Ensure that sandbox a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5 in task-service has been cleanup successfully"
Mar 14 00:23:08.426400 kubelet[2797]: I0314 00:23:08.426330 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:23:08.428827 containerd[1588]: time="2026-03-14T00:23:08.427197325Z" level=info msg="StopPodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\""
Mar 14 00:23:08.435302 kubelet[2797]: I0314 00:23:08.430814 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:23:08.435595 containerd[1588]:
time="2026-03-14T00:23:08.431837833Z" level=info msg="Ensure that sandbox fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3 in task-service has been cleanup successfully" Mar 14 00:23:08.435595 containerd[1588]: time="2026-03-14T00:23:08.432764464Z" level=info msg="StopPodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\"" Mar 14 00:23:08.435595 containerd[1588]: time="2026-03-14T00:23:08.433047963Z" level=info msg="Ensure that sandbox 1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70 in task-service has been cleanup successfully" Mar 14 00:23:08.458421 kubelet[2797]: I0314 00:23:08.458381 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:23:08.464337 containerd[1588]: time="2026-03-14T00:23:08.462828254Z" level=info msg="StopPodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\"" Mar 14 00:23:08.464337 containerd[1588]: time="2026-03-14T00:23:08.463094710Z" level=info msg="Ensure that sandbox 26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17 in task-service has been cleanup successfully" Mar 14 00:23:08.661151 containerd[1588]: time="2026-03-14T00:23:08.661087071Z" level=error msg="StopPodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" failed" error="failed to destroy network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.663915 kubelet[2797]: E0314 00:23:08.663580 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Mar 14 00:23:08.663915 kubelet[2797]: E0314 00:23:08.663670 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"} Mar 14 00:23:08.663915 kubelet[2797]: E0314 00:23:08.663839 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5170bc80-85e6-4371-b313-d56321f1c8e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.663915 kubelet[2797]: E0314 00:23:08.663874 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5170bc80-85e6-4371-b313-d56321f1c8e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rjgfq" podUID="5170bc80-85e6-4371-b313-d56321f1c8e2" Mar 14 00:23:08.681404 containerd[1588]: time="2026-03-14T00:23:08.681168361Z" level=error msg="StopPodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" failed" error="failed to destroy network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.682790 kubelet[2797]: E0314 00:23:08.682298 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:23:08.682790 kubelet[2797]: E0314 00:23:08.682367 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17"} Mar 14 00:23:08.682790 kubelet[2797]: E0314 00:23:08.682424 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67d55006-38d6-455f-9fde-745c7e34d464\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.682790 kubelet[2797]: E0314 00:23:08.682456 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67d55006-38d6-455f-9fde-745c7e34d464\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tl9f4" podUID="67d55006-38d6-455f-9fde-745c7e34d464" Mar 14 00:23:08.690964 containerd[1588]: time="2026-03-14T00:23:08.690900763Z" level=error msg="StopPodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" failed" error="failed to destroy network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.693108 kubelet[2797]: E0314 00:23:08.692026 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:23:08.696085 kubelet[2797]: E0314 00:23:08.694632 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f"} Mar 14 00:23:08.696085 kubelet[2797]: E0314 00:23:08.694805 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23027385-ff4e-4dfa-87df-bf52afa804b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.696085 kubelet[2797]: E0314 00:23:08.694852 2797 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23027385-ff4e-4dfa-87df-bf52afa804b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-xr6mf" podUID="23027385-ff4e-4dfa-87df-bf52afa804b0" Mar 14 00:23:08.704535 containerd[1588]: time="2026-03-14T00:23:08.704478798Z" level=error msg="StopPodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" failed" error="failed to destroy network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.705877 containerd[1588]: time="2026-03-14T00:23:08.705117973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bblz,Uid:f1606a97-9d6b-48a1-9c1e-67441e5ad5ba,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:08.707772 kubelet[2797]: E0314 00:23:08.707198 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:23:08.707772 kubelet[2797]: E0314 00:23:08.707306 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470"} Mar 14 00:23:08.707772 kubelet[2797]: E0314 00:23:08.707359 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7bb7cd5-cb1e-4575-9842-e88d47314fe4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.708286 kubelet[2797]: E0314 00:23:08.707419 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7bb7cd5-cb1e-4575-9842-e88d47314fe4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d5d89d5cb-76h6t" podUID="b7bb7cd5-cb1e-4575-9842-e88d47314fe4" Mar 14 00:23:08.708928 containerd[1588]: time="2026-03-14T00:23:08.708837682Z" level=error msg="StopPodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" failed" error="failed to destroy network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.709334 kubelet[2797]: E0314 00:23:08.709304 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:08.709609 kubelet[2797]: E0314 00:23:08.709445 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5"} Mar 14 00:23:08.709609 kubelet[2797]: E0314 00:23:08.709488 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7612124-f1e4-47f2-970d-51a3ea494c99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.709609 kubelet[2797]: E0314 00:23:08.709526 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7612124-f1e4-47f2-970d-51a3ea494c99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66bbbb9b4-8w5mx" podUID="e7612124-f1e4-47f2-970d-51a3ea494c99" Mar 14 00:23:08.725493 containerd[1588]: time="2026-03-14T00:23:08.725431750Z" level=error msg="StopPodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" failed" error="failed to destroy network for 
sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.726392 kubelet[2797]: E0314 00:23:08.726299 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Mar 14 00:23:08.726966 kubelet[2797]: E0314 00:23:08.726848 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"} Mar 14 00:23:08.727682 kubelet[2797]: E0314 00:23:08.727586 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"730ac9f6-6ce5-4082-a3ad-c868f729e031\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.728013 kubelet[2797]: E0314 00:23:08.727976 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"730ac9f6-6ce5-4082-a3ad-c868f729e031\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x" podUID="730ac9f6-6ce5-4082-a3ad-c868f729e031" Mar 14 00:23:08.729601 containerd[1588]: time="2026-03-14T00:23:08.729480493Z" level=error msg="StopPodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" failed" error="failed to destroy network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.730153 kubelet[2797]: E0314 00:23:08.729964 2797 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Mar 14 00:23:08.730153 kubelet[2797]: E0314 00:23:08.730016 2797 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"} Mar 14 00:23:08.730153 kubelet[2797]: E0314 00:23:08.730055 2797 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99edc0ed-4d50-4c4c-9806-84b2bb9168af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Mar 14 00:23:08.730153 kubelet[2797]: E0314 00:23:08.730087 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99edc0ed-4d50-4c4c-9806-84b2bb9168af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f7859c787-l2px5" podUID="99edc0ed-4d50-4c4c-9806-84b2bb9168af" Mar 14 00:23:08.781348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470-shm.mount: Deactivated successfully. Mar 14 00:23:08.782097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3-shm.mount: Deactivated successfully. Mar 14 00:23:08.785606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f-shm.mount: Deactivated successfully. Mar 14 00:23:08.785896 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70-shm.mount: Deactivated successfully. Mar 14 00:23:08.786737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5-shm.mount: Deactivated successfully. Mar 14 00:23:08.788550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5-shm.mount: Deactivated successfully. 
Mar 14 00:23:08.930805 containerd[1588]: time="2026-03-14T00:23:08.928139259Z" level=error msg="Failed to destroy network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.933355 containerd[1588]: time="2026-03-14T00:23:08.933171024Z" level=error msg="encountered an error cleaning up failed sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.933355 containerd[1588]: time="2026-03-14T00:23:08.933312573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bblz,Uid:f1606a97-9d6b-48a1-9c1e-67441e5ad5ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.933793 kubelet[2797]: E0314 00:23:08.933661 2797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:08.933933 kubelet[2797]: E0314 00:23:08.933833 2797 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4bblz" Mar 14 00:23:08.933933 kubelet[2797]: E0314 00:23:08.933878 2797 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4bblz" Mar 14 00:23:08.934022 kubelet[2797]: E0314 00:23:08.933951 2797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4bblz_calico-system(f1606a97-9d6b-48a1-9c1e-67441e5ad5ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4bblz_calico-system(f1606a97-9d6b-48a1-9c1e-67441e5ad5ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4bblz" podUID="f1606a97-9d6b-48a1-9c1e-67441e5ad5ba" Mar 14 00:23:08.935132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5-shm.mount: Deactivated successfully. 
Mar 14 00:23:09.466477 kubelet[2797]: I0314 00:23:09.466356 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:09.468779 containerd[1588]: time="2026-03-14T00:23:09.468240919Z" level=info msg="StopPodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\"" Mar 14 00:23:09.470088 containerd[1588]: time="2026-03-14T00:23:09.469850016Z" level=info msg="StopPodSandbox for \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\"" Mar 14 00:23:09.473093 containerd[1588]: time="2026-03-14T00:23:09.470137904Z" level=info msg="Ensure that sandbox 2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5 in task-service has been cleanup successfully" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.670 [INFO][4205] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.670 [INFO][4205] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" iface="eth0" netns="/var/run/netns/cni-e7625806-4d72-b29e-6b84-3205d64b4ef3" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.671 [INFO][4205] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" iface="eth0" netns="/var/run/netns/cni-e7625806-4d72-b29e-6b84-3205d64b4ef3" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.675 [INFO][4205] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" iface="eth0" netns="/var/run/netns/cni-e7625806-4d72-b29e-6b84-3205d64b4ef3" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.675 [INFO][4205] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.675 [INFO][4205] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.716 [INFO][4241] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.717 [INFO][4241] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.717 [INFO][4241] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.728 [WARNING][4241] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.728 [INFO][4241] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.733 [INFO][4241] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:09.764411 containerd[1588]: 2026-03-14 00:23:09.752 [INFO][4205] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:09.764411 containerd[1588]: time="2026-03-14T00:23:09.764033096Z" level=info msg="TearDown network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\" successfully" Mar 14 00:23:09.764411 containerd[1588]: time="2026-03-14T00:23:09.764075286Z" level=info msg="StopPodSandbox for \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\" returns successfully" Mar 14 00:23:09.766231 systemd[1]: run-netns-cni\x2de7625806\x2d4d72\x2db29e\x2d6b84\x2d3205d64b4ef3.mount: Deactivated successfully. 
Mar 14 00:23:09.772046 containerd[1588]: time="2026-03-14T00:23:09.771901609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bblz,Uid:f1606a97-9d6b-48a1-9c1e-67441e5ad5ba,Namespace:calico-system,Attempt:1,}" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.668 [INFO][4208] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.669 [INFO][4208] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" iface="eth0" netns="/var/run/netns/cni-d29b1114-9db9-b765-6b18-fbf7e5a6c7f6" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.670 [INFO][4208] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" iface="eth0" netns="/var/run/netns/cni-d29b1114-9db9-b765-6b18-fbf7e5a6c7f6" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.675 [INFO][4208] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" iface="eth0" netns="/var/run/netns/cni-d29b1114-9db9-b765-6b18-fbf7e5a6c7f6" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.675 [INFO][4208] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.675 [INFO][4208] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.717 [INFO][4240] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.717 [INFO][4240] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.733 [INFO][4240] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.760 [WARNING][4240] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.760 [INFO][4240] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.765 [INFO][4240] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:09.777514 containerd[1588]: 2026-03-14 00:23:09.772 [INFO][4208] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:09.779255 containerd[1588]: time="2026-03-14T00:23:09.779075944Z" level=info msg="TearDown network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" successfully" Mar 14 00:23:09.779255 containerd[1588]: time="2026-03-14T00:23:09.779115058Z" level=info msg="StopPodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" returns successfully" Mar 14 00:23:09.782581 systemd[1]: run-netns-cni\x2dd29b1114\x2d9db9\x2db765\x2d6b18\x2dfbf7e5a6c7f6.mount: Deactivated successfully. 
Mar 14 00:23:09.882335 kubelet[2797]: I0314 00:23:09.882192 2797 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-backend-key-pair\") pod \"e7612124-f1e4-47f2-970d-51a3ea494c99\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " Mar 14 00:23:09.882335 kubelet[2797]: I0314 00:23:09.882282 2797 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdqn9\" (UniqueName: \"kubernetes.io/projected/e7612124-f1e4-47f2-970d-51a3ea494c99-kube-api-access-sdqn9\") pod \"e7612124-f1e4-47f2-970d-51a3ea494c99\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " Mar 14 00:23:09.882586 kubelet[2797]: I0314 00:23:09.882379 2797 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-ca-bundle\") pod \"e7612124-f1e4-47f2-970d-51a3ea494c99\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " Mar 14 00:23:09.882586 kubelet[2797]: I0314 00:23:09.882444 2797 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-nginx-config\") pod \"e7612124-f1e4-47f2-970d-51a3ea494c99\" (UID: \"e7612124-f1e4-47f2-970d-51a3ea494c99\") " Mar 14 00:23:09.887429 kubelet[2797]: I0314 00:23:09.887354 2797 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e7612124-f1e4-47f2-970d-51a3ea494c99" (UID: "e7612124-f1e4-47f2-970d-51a3ea494c99"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:23:09.887914 kubelet[2797]: I0314 00:23:09.887858 2797 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e7612124-f1e4-47f2-970d-51a3ea494c99" (UID: "e7612124-f1e4-47f2-970d-51a3ea494c99"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:23:09.899907 kubelet[2797]: I0314 00:23:09.899611 2797 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7612124-f1e4-47f2-970d-51a3ea494c99-kube-api-access-sdqn9" (OuterVolumeSpecName: "kube-api-access-sdqn9") pod "e7612124-f1e4-47f2-970d-51a3ea494c99" (UID: "e7612124-f1e4-47f2-970d-51a3ea494c99"). InnerVolumeSpecName "kube-api-access-sdqn9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:23:09.899907 kubelet[2797]: I0314 00:23:09.899843 2797 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e7612124-f1e4-47f2-970d-51a3ea494c99" (UID: "e7612124-f1e4-47f2-970d-51a3ea494c99"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:23:09.902378 systemd[1]: var-lib-kubelet-pods-e7612124\x2df1e4\x2d47f2\x2d970d\x2d51a3ea494c99-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:23:09.912128 systemd[1]: var-lib-kubelet-pods-e7612124\x2df1e4\x2d47f2\x2d970d\x2d51a3ea494c99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsdqn9.mount: Deactivated successfully. 
Mar 14 00:23:09.983629 kubelet[2797]: I0314 00:23:09.983489 2797 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 14 00:23:09.983629 kubelet[2797]: I0314 00:23:09.983541 2797 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sdqn9\" (UniqueName: \"kubernetes.io/projected/e7612124-f1e4-47f2-970d-51a3ea494c99-kube-api-access-sdqn9\") on node \"localhost\" DevicePath \"\"" Mar 14 00:23:09.983629 kubelet[2797]: I0314 00:23:09.983556 2797 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 14 00:23:09.983629 kubelet[2797]: I0314 00:23:09.983570 2797 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e7612124-f1e4-47f2-970d-51a3ea494c99-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 14 00:23:10.096480 systemd-networkd[1251]: cali710cf3a4414: Link UP Mar 14 00:23:10.099489 systemd-networkd[1251]: cali710cf3a4414: Gained carrier Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.855 [ERROR][4255] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.879 [INFO][4255] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4bblz-eth0 csi-node-driver- calico-system f1606a97-9d6b-48a1-9c1e-67441e5ad5ba 1010 0 2026-03-14 00:22:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4bblz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali710cf3a4414 [] [] }} ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.880 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.963 [INFO][4272] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" HandleID="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.980 [INFO][4272] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" HandleID="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fad80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4bblz", "timestamp":"2026-03-14 00:23:09.963985627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000428c60)} Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.980 [INFO][4272] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.980 [INFO][4272] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.980 [INFO][4272] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:09.986 [INFO][4272] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.000 [INFO][4272] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.011 [INFO][4272] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.015 [INFO][4272] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.029 [INFO][4272] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.029 [INFO][4272] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.034 [INFO][4272] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.050 [INFO][4272] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.068 [INFO][4272] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.068 [INFO][4272] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" host="localhost" Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.068 [INFO][4272] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:10.132842 containerd[1588]: 2026-03-14 00:23:10.068 [INFO][4272] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" HandleID="k8s-pod-network.845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.134416 containerd[1588]: 2026-03-14 00:23:10.073 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4bblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4bblz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali710cf3a4414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:10.134416 containerd[1588]: 2026-03-14 00:23:10.073 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.134416 containerd[1588]: 2026-03-14 00:23:10.073 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali710cf3a4414 ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.134416 containerd[1588]: 2026-03-14 00:23:10.097 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.134416 containerd[1588]: 2026-03-14 00:23:10.098 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4bblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f", Pod:"csi-node-driver-4bblz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali710cf3a4414", MAC:"aa:79:6b:07:70:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 
00:23:10.134416 containerd[1588]: 2026-03-14 00:23:10.128 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f" Namespace="calico-system" Pod="csi-node-driver-4bblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:10.188476 containerd[1588]: time="2026-03-14T00:23:10.188175271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:10.188476 containerd[1588]: time="2026-03-14T00:23:10.188331598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:10.188476 containerd[1588]: time="2026-03-14T00:23:10.188351255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:10.188776 containerd[1588]: time="2026-03-14T00:23:10.188470031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:10.276031 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:10.313572 containerd[1588]: time="2026-03-14T00:23:10.313387852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bblz,Uid:f1606a97-9d6b-48a1-9c1e-67441e5ad5ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f\"" Mar 14 00:23:10.317587 containerd[1588]: time="2026-03-14T00:23:10.317194670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:23:10.705582 kubelet[2797]: I0314 00:23:10.705419 2797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7612124-f1e4-47f2-970d-51a3ea494c99" path="/var/lib/kubelet/pods/e7612124-f1e4-47f2-970d-51a3ea494c99/volumes" Mar 14 00:23:10.801231 kubelet[2797]: I0314 00:23:10.801175 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a34dc971-ff0b-4031-a379-ad0b36dd9b45-nginx-config\") pod \"whisker-f7b649fc7-sp226\" (UID: \"a34dc971-ff0b-4031-a379-ad0b36dd9b45\") " pod="calico-system/whisker-f7b649fc7-sp226" Mar 14 00:23:10.801834 kubelet[2797]: I0314 00:23:10.801607 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsj2f\" (UniqueName: \"kubernetes.io/projected/a34dc971-ff0b-4031-a379-ad0b36dd9b45-kube-api-access-zsj2f\") pod \"whisker-f7b649fc7-sp226\" (UID: \"a34dc971-ff0b-4031-a379-ad0b36dd9b45\") " pod="calico-system/whisker-f7b649fc7-sp226" Mar 14 00:23:10.801834 kubelet[2797]: I0314 00:23:10.801667 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a34dc971-ff0b-4031-a379-ad0b36dd9b45-whisker-ca-bundle\") pod \"whisker-f7b649fc7-sp226\" (UID: \"a34dc971-ff0b-4031-a379-ad0b36dd9b45\") " pod="calico-system/whisker-f7b649fc7-sp226" Mar 14 00:23:10.801834 kubelet[2797]: I0314 00:23:10.801759 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a34dc971-ff0b-4031-a379-ad0b36dd9b45-whisker-backend-key-pair\") pod \"whisker-f7b649fc7-sp226\" (UID: \"a34dc971-ff0b-4031-a379-ad0b36dd9b45\") " pod="calico-system/whisker-f7b649fc7-sp226" Mar 14 00:23:11.084164 containerd[1588]: time="2026-03-14T00:23:11.084046504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7b649fc7-sp226,Uid:a34dc971-ff0b-4031-a379-ad0b36dd9b45,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:11.629568 kernel: calico-node[4442]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:23:11.698941 systemd-networkd[1251]: cali860b2e5822e: Link UP Mar 14 00:23:11.703940 systemd-networkd[1251]: cali860b2e5822e: Gained carrier Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.350 [ERROR][4459] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.405 [INFO][4459] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f7b649fc7--sp226-eth0 whisker-f7b649fc7- calico-system a34dc971-ff0b-4031-a379-ad0b36dd9b45 1029 0 2026-03-14 00:23:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f7b649fc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f7b649fc7-sp226 eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] cali860b2e5822e [] [] }} ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.405 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.510 [INFO][4500] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" HandleID="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Workload="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.523 [INFO][4500] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" HandleID="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Workload="localhost-k8s-whisker--f7b649fc7--sp226-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f7b649fc7-sp226", "timestamp":"2026-03-14 00:23:11.510971237 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000197080)} Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.523 [INFO][4500] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.523 [INFO][4500] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.524 [INFO][4500] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.532 [INFO][4500] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.559 [INFO][4500] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.579 [INFO][4500] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.588 [INFO][4500] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.593 [INFO][4500] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.593 [INFO][4500] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.602 [INFO][4500] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853 Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.617 [INFO][4500] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.660 [INFO][4500] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.660 [INFO][4500] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" host="localhost" Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.661 [INFO][4500] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:11.748024 containerd[1588]: 2026-03-14 00:23:11.661 [INFO][4500] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" HandleID="k8s-pod-network.f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Workload="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.749068 containerd[1588]: 2026-03-14 00:23:11.678 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f7b649fc7--sp226-eth0", GenerateName:"whisker-f7b649fc7-", Namespace:"calico-system", SelfLink:"", UID:"a34dc971-ff0b-4031-a379-ad0b36dd9b45", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7b649fc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f7b649fc7-sp226", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali860b2e5822e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:11.749068 containerd[1588]: 2026-03-14 00:23:11.678 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.749068 containerd[1588]: 2026-03-14 00:23:11.678 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali860b2e5822e ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.749068 containerd[1588]: 2026-03-14 00:23:11.685 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.749068 containerd[1588]: 2026-03-14 00:23:11.686 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" 
Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f7b649fc7--sp226-eth0", GenerateName:"whisker-f7b649fc7-", Namespace:"calico-system", SelfLink:"", UID:"a34dc971-ff0b-4031-a379-ad0b36dd9b45", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7b649fc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853", Pod:"whisker-f7b649fc7-sp226", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali860b2e5822e", MAC:"ce:66:2e:24:8d:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:11.749068 containerd[1588]: 2026-03-14 00:23:11.729 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853" Namespace="calico-system" Pod="whisker-f7b649fc7-sp226" WorkloadEndpoint="localhost-k8s-whisker--f7b649fc7--sp226-eth0" Mar 14 00:23:11.854967 containerd[1588]: time="2026-03-14T00:23:11.854080071Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:11.854967 containerd[1588]: time="2026-03-14T00:23:11.854175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:11.854967 containerd[1588]: time="2026-03-14T00:23:11.854193417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:11.854967 containerd[1588]: time="2026-03-14T00:23:11.854454643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:11.950678 systemd-networkd[1251]: cali710cf3a4414: Gained IPv6LL Mar 14 00:23:12.007862 systemd[1]: run-containerd-runc-k8s.io-f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853-runc.lpYyBJ.mount: Deactivated successfully. 
Mar 14 00:23:12.121107 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:12.324990 containerd[1588]: time="2026-03-14T00:23:12.324942954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7b649fc7-sp226,Uid:a34dc971-ff0b-4031-a379-ad0b36dd9b45,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853\"" Mar 14 00:23:12.474353 containerd[1588]: time="2026-03-14T00:23:12.474258361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:12.487764 containerd[1588]: time="2026-03-14T00:23:12.486995541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:23:12.488046 containerd[1588]: time="2026-03-14T00:23:12.487924388Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:12.509341 containerd[1588]: time="2026-03-14T00:23:12.509285302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:12.510905 containerd[1588]: time="2026-03-14T00:23:12.510868183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.193602608s" Mar 14 00:23:12.511523 containerd[1588]: time="2026-03-14T00:23:12.511496929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" 
returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:23:12.517523 containerd[1588]: time="2026-03-14T00:23:12.517132011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:23:12.532799 containerd[1588]: time="2026-03-14T00:23:12.532484945Z" level=info msg="CreateContainer within sandbox \"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:23:12.659213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642800832.mount: Deactivated successfully. Mar 14 00:23:12.675758 containerd[1588]: time="2026-03-14T00:23:12.675594313Z" level=info msg="CreateContainer within sandbox \"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eabb9144c2258e7aa8d3716dd636eb4e2822b00ff11722663be8acf4f4ef7094\"" Mar 14 00:23:12.678792 containerd[1588]: time="2026-03-14T00:23:12.678055175Z" level=info msg="StartContainer for \"eabb9144c2258e7aa8d3716dd636eb4e2822b00ff11722663be8acf4f4ef7094\"" Mar 14 00:23:12.885123 containerd[1588]: time="2026-03-14T00:23:12.885072337Z" level=info msg="StartContainer for \"eabb9144c2258e7aa8d3716dd636eb4e2822b00ff11722663be8acf4f4ef7094\" returns successfully" Mar 14 00:23:13.285993 systemd-networkd[1251]: cali860b2e5822e: Gained IPv6LL Mar 14 00:23:13.313546 systemd-networkd[1251]: vxlan.calico: Link UP Mar 14 00:23:13.313555 systemd-networkd[1251]: vxlan.calico: Gained carrier Mar 14 00:23:13.921781 containerd[1588]: time="2026-03-14T00:23:13.920136985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:13.926807 containerd[1588]: time="2026-03-14T00:23:13.923344639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 
14 00:23:13.929590 containerd[1588]: time="2026-03-14T00:23:13.927940262Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:13.933495 containerd[1588]: time="2026-03-14T00:23:13.933367071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:13.936089 containerd[1588]: time="2026-03-14T00:23:13.935938681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.418756054s" Mar 14 00:23:13.936089 containerd[1588]: time="2026-03-14T00:23:13.936052068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:23:13.946580 containerd[1588]: time="2026-03-14T00:23:13.945116205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:23:13.980565 containerd[1588]: time="2026-03-14T00:23:13.979814227Z" level=info msg="CreateContainer within sandbox \"f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:23:14.038761 containerd[1588]: time="2026-03-14T00:23:14.038649701Z" level=info msg="CreateContainer within sandbox \"f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3760b5430e0f0cd8b4da2a60e5dd18986cae11e3f790055217d36da2a173c80f\"" Mar 14 
00:23:14.044630 containerd[1588]: time="2026-03-14T00:23:14.043415999Z" level=info msg="StartContainer for \"3760b5430e0f0cd8b4da2a60e5dd18986cae11e3f790055217d36da2a173c80f\"" Mar 14 00:23:14.399447 containerd[1588]: time="2026-03-14T00:23:14.399203701Z" level=info msg="StartContainer for \"3760b5430e0f0cd8b4da2a60e5dd18986cae11e3f790055217d36da2a173c80f\" returns successfully" Mar 14 00:23:14.664171 containerd[1588]: time="2026-03-14T00:23:14.663860883Z" level=info msg="StopPodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\"" Mar 14 00:23:14.817013 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.776 [WARNING][4738] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" WorkloadEndpoint="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.777 [INFO][4738] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.777 [INFO][4738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" iface="eth0" netns="" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.777 [INFO][4738] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.777 [INFO][4738] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.861 [INFO][4748] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.861 [INFO][4748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.861 [INFO][4748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.878 [WARNING][4748] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.878 [INFO][4748] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.884 [INFO][4748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:14.896321 containerd[1588]: 2026-03-14 00:23:14.888 [INFO][4738] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:14.897610 containerd[1588]: time="2026-03-14T00:23:14.897569578Z" level=info msg="TearDown network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" successfully" Mar 14 00:23:14.897773 containerd[1588]: time="2026-03-14T00:23:14.897750101Z" level=info msg="StopPodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" returns successfully" Mar 14 00:23:14.911166 containerd[1588]: time="2026-03-14T00:23:14.911052690Z" level=info msg="RemovePodSandbox for \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\"" Mar 14 00:23:14.916237 containerd[1588]: time="2026-03-14T00:23:14.915837758Z" level=info msg="Forcibly stopping sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\"" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.008 [WARNING][4773] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" 
WorkloadEndpoint="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.009 [INFO][4773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.009 [INFO][4773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" iface="eth0" netns="" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.009 [INFO][4773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.009 [INFO][4773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.088 [INFO][4781] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.089 [INFO][4781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.089 [INFO][4781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.110 [WARNING][4781] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.110 [INFO][4781] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" HandleID="k8s-pod-network.a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Workload="localhost-k8s-whisker--66bbbb9b4--8w5mx-eth0" Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.114 [INFO][4781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:15.126844 containerd[1588]: 2026-03-14 00:23:15.122 [INFO][4773] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5" Mar 14 00:23:15.126844 containerd[1588]: time="2026-03-14T00:23:15.125954332Z" level=info msg="TearDown network for sandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" successfully" Mar 14 00:23:15.136557 containerd[1588]: time="2026-03-14T00:23:15.136306075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:23:15.136557 containerd[1588]: time="2026-03-14T00:23:15.136436752Z" level=info msg="RemovePodSandbox \"a073288c54a734ecf3ae72a7c20c046a712429ed4e336dbc6945ddd508466cb5\" returns successfully" Mar 14 00:23:15.138104 containerd[1588]: time="2026-03-14T00:23:15.137811885Z" level=info msg="StopPodSandbox for \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\"" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.237 [WARNING][4799] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4bblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f", Pod:"csi-node-driver-4bblz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali710cf3a4414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.238 [INFO][4799] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.238 [INFO][4799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" iface="eth0" netns="" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.238 [INFO][4799] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.238 [INFO][4799] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.316 [INFO][4808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.316 [INFO][4808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.316 [INFO][4808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.361 [WARNING][4808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.362 [INFO][4808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.369 [INFO][4808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:15.381401 containerd[1588]: 2026-03-14 00:23:15.375 [INFO][4799] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.384925 containerd[1588]: time="2026-03-14T00:23:15.383188133Z" level=info msg="TearDown network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\" successfully" Mar 14 00:23:15.384925 containerd[1588]: time="2026-03-14T00:23:15.383233971Z" level=info msg="StopPodSandbox for \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\" returns successfully" Mar 14 00:23:15.386651 containerd[1588]: time="2026-03-14T00:23:15.385815304Z" level=info msg="RemovePodSandbox for \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\"" Mar 14 00:23:15.386651 containerd[1588]: time="2026-03-14T00:23:15.386061011Z" level=info msg="Forcibly stopping sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\"" Mar 14 00:23:15.505232 containerd[1588]: time="2026-03-14T00:23:15.503969017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:15.509014 
containerd[1588]: time="2026-03-14T00:23:15.508913250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:23:15.511174 containerd[1588]: time="2026-03-14T00:23:15.511080265Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:15.520274 containerd[1588]: time="2026-03-14T00:23:15.520210355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:15.521346 containerd[1588]: time="2026-03-14T00:23:15.521263261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.57610719s" Mar 14 00:23:15.521346 containerd[1588]: time="2026-03-14T00:23:15.521334027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:23:15.524025 containerd[1588]: time="2026-03-14T00:23:15.523994615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:23:15.539009 containerd[1588]: time="2026-03-14T00:23:15.538837641Z" level=info msg="CreateContainer within sandbox \"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:23:15.595197 containerd[1588]: time="2026-03-14T00:23:15.595128961Z" 
level=info msg="CreateContainer within sandbox \"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f56cf9d008934e6cdabc1671d5c095b145303ad3606583a899d92cbd5e9917cf\"" Mar 14 00:23:15.599502 containerd[1588]: time="2026-03-14T00:23:15.597188870Z" level=info msg="StartContainer for \"f56cf9d008934e6cdabc1671d5c095b145303ad3606583a899d92cbd5e9917cf\"" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.519 [WARNING][4827] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4bblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1606a97-9d6b-48a1-9c1e-67441e5ad5ba", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845751dd5dc755c4cfa5fa35db569d785734cd0a9be30d7dc96ea715ff230d2f", Pod:"csi-node-driver-4bblz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali710cf3a4414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.520 [INFO][4827] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.522 [INFO][4827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" iface="eth0" netns="" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.522 [INFO][4827] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.522 [INFO][4827] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.607 [INFO][4836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.607 [INFO][4836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.607 [INFO][4836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.627 [WARNING][4836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.627 [INFO][4836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" HandleID="k8s-pod-network.2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Workload="localhost-k8s-csi--node--driver--4bblz-eth0" Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.658 [INFO][4836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:15.676157 containerd[1588]: 2026-03-14 00:23:15.670 [INFO][4827] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5" Mar 14 00:23:15.677468 containerd[1588]: time="2026-03-14T00:23:15.677433338Z" level=info msg="TearDown network for sandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\" successfully" Mar 14 00:23:15.687135 containerd[1588]: time="2026-03-14T00:23:15.687069261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:23:15.687374 containerd[1588]: time="2026-03-14T00:23:15.687303757Z" level=info msg="RemovePodSandbox \"2aa8d0fcbc0b76b6d3cd800dfcd808eb8ce1576ad3f2eb5a1e99b9ab12d640e5\" returns successfully" Mar 14 00:23:15.720647 systemd-journald[1176]: Under memory pressure, flushing caches. Mar 14 00:23:15.714325 systemd-resolved[1475]: Under memory pressure, flushing caches. Mar 14 00:23:15.714390 systemd-resolved[1475]: Flushed all caches. 
Mar 14 00:23:15.795313 containerd[1588]: time="2026-03-14T00:23:15.792918789Z" level=info msg="StartContainer for \"f56cf9d008934e6cdabc1671d5c095b145303ad3606583a899d92cbd5e9917cf\" returns successfully" Mar 14 00:23:16.576486 kubelet[2797]: I0314 00:23:16.576370 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4bblz" podStartSLOduration=32.368921879 podStartE2EDuration="37.576346842s" podCreationTimestamp="2026-03-14 00:22:39 +0000 UTC" firstStartedPulling="2026-03-14 00:23:10.315503592 +0000 UTC m=+55.992471538" lastFinishedPulling="2026-03-14 00:23:15.522928555 +0000 UTC m=+61.199896501" observedRunningTime="2026-03-14 00:23:16.573792586 +0000 UTC m=+62.250760553" watchObservedRunningTime="2026-03-14 00:23:16.576346842 +0000 UTC m=+62.253314788" Mar 14 00:23:16.577433 kubelet[2797]: I0314 00:23:16.577057 2797 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:23:16.583768 kubelet[2797]: I0314 00:23:16.581655 2797 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:23:17.416187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864522438.mount: Deactivated successfully. 
Mar 14 00:23:17.487856 containerd[1588]: time="2026-03-14T00:23:17.487626684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:17.489233 containerd[1588]: time="2026-03-14T00:23:17.489143545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:23:17.491632 containerd[1588]: time="2026-03-14T00:23:17.491432983Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:17.498140 containerd[1588]: time="2026-03-14T00:23:17.497676946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:17.499191 containerd[1588]: time="2026-03-14T00:23:17.499121847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.974891173s" Mar 14 00:23:17.499191 containerd[1588]: time="2026-03-14T00:23:17.499179336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:23:17.508642 containerd[1588]: time="2026-03-14T00:23:17.508142357Z" level=info msg="CreateContainer within sandbox \"f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:23:17.548972 
containerd[1588]: time="2026-03-14T00:23:17.548673847Z" level=info msg="CreateContainer within sandbox \"f2835cdaa4285cc265533ea45029172ad193c0f736abd109a3bbfef170131853\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f5c1864c48c8df4ba2aae7eccd383dc5eb2ba2dc0b153d94d98b1986f10b3876\"" Mar 14 00:23:17.551040 containerd[1588]: time="2026-03-14T00:23:17.550992142Z" level=info msg="StartContainer for \"f5c1864c48c8df4ba2aae7eccd383dc5eb2ba2dc0b153d94d98b1986f10b3876\"" Mar 14 00:23:17.714028 containerd[1588]: time="2026-03-14T00:23:17.713762027Z" level=info msg="StartContainer for \"f5c1864c48c8df4ba2aae7eccd383dc5eb2ba2dc0b153d94d98b1986f10b3876\" returns successfully" Mar 14 00:23:18.621145 kubelet[2797]: I0314 00:23:18.620736 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f7b649fc7-sp226" podStartSLOduration=3.4831916339999998 podStartE2EDuration="8.620612954s" podCreationTimestamp="2026-03-14 00:23:10 +0000 UTC" firstStartedPulling="2026-03-14 00:23:12.363862691 +0000 UTC m=+58.040830626" lastFinishedPulling="2026-03-14 00:23:17.501284011 +0000 UTC m=+63.178251946" observedRunningTime="2026-03-14 00:23:18.619192508 +0000 UTC m=+64.296160604" watchObservedRunningTime="2026-03-14 00:23:18.620612954 +0000 UTC m=+64.297580891" Mar 14 00:23:20.700402 containerd[1588]: time="2026-03-14T00:23:20.697880109Z" level=info msg="StopPodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\"" Mar 14 00:23:20.700402 containerd[1588]: time="2026-03-14T00:23:20.698647111Z" level=info msg="StopPodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\"" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.837 [INFO][4955] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.838 [INFO][4955] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" iface="eth0" netns="/var/run/netns/cni-a7b29f5a-867e-36b1-53d4-66b290af071d" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.838 [INFO][4955] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" iface="eth0" netns="/var/run/netns/cni-a7b29f5a-867e-36b1-53d4-66b290af071d" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.839 [INFO][4955] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" iface="eth0" netns="/var/run/netns/cni-a7b29f5a-867e-36b1-53d4-66b290af071d" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.839 [INFO][4955] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.839 [INFO][4955] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.892 [INFO][4977] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.892 [INFO][4977] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.892 [INFO][4977] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.904 [WARNING][4977] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.904 [INFO][4977] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.910 [INFO][4977] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:20.917380 containerd[1588]: 2026-03-14 00:23:20.914 [INFO][4955] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Mar 14 00:23:20.920356 containerd[1588]: time="2026-03-14T00:23:20.920270897Z" level=info msg="TearDown network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" successfully" Mar 14 00:23:20.920356 containerd[1588]: time="2026-03-14T00:23:20.920314209Z" level=info msg="StopPodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" returns successfully" Mar 14 00:23:20.924144 containerd[1588]: time="2026-03-14T00:23:20.922298247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-dtq7x,Uid:730ac9f6-6ce5-4082-a3ad-c868f729e031,Namespace:calico-system,Attempt:1,}" Mar 14 00:23:20.924929 systemd[1]: run-netns-cni\x2da7b29f5a\x2d867e\x2d36b1\x2d53d4\x2d66b290af071d.mount: Deactivated successfully. 
Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.854 [INFO][4966] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.855 [INFO][4966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" iface="eth0" netns="/var/run/netns/cni-2f19eb1e-9ab7-2b39-e297-ae5014b5ea4f" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.855 [INFO][4966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" iface="eth0" netns="/var/run/netns/cni-2f19eb1e-9ab7-2b39-e297-ae5014b5ea4f" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.856 [INFO][4966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" iface="eth0" netns="/var/run/netns/cni-2f19eb1e-9ab7-2b39-e297-ae5014b5ea4f" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.856 [INFO][4966] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.856 [INFO][4966] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.924 [INFO][4984] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.925 [INFO][4984] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.925 [INFO][4984] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.938 [WARNING][4984] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.938 [INFO][4984] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.941 [INFO][4984] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:20.954051 containerd[1588]: 2026-03-14 00:23:20.948 [INFO][4966] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:23:20.954051 containerd[1588]: time="2026-03-14T00:23:20.953303644Z" level=info msg="TearDown network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" successfully" Mar 14 00:23:20.954051 containerd[1588]: time="2026-03-14T00:23:20.953339562Z" level=info msg="StopPodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" returns successfully" Mar 14 00:23:20.954828 kubelet[2797]: E0314 00:23:20.953855 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:20.961785 containerd[1588]: time="2026-03-14T00:23:20.957076664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tl9f4,Uid:67d55006-38d6-455f-9fde-745c7e34d464,Namespace:kube-system,Attempt:1,}" Mar 14 00:23:20.961396 systemd[1]: run-netns-cni\x2d2f19eb1e\x2d9ab7\x2d2b39\x2de297\x2dae5014b5ea4f.mount: Deactivated successfully. 
Mar 14 00:23:21.279195 systemd-networkd[1251]: cali9d661cc222a: Link UP Mar 14 00:23:21.279584 systemd-networkd[1251]: cali9d661cc222a: Gained carrier Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.098 [INFO][4993] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0 calico-apiserver-d5d89d5cb- calico-system 730ac9f6-6ce5-4082-a3ad-c868f729e031 1082 0 2026-03-14 00:22:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5d89d5cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5d89d5cb-dtq7x eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9d661cc222a [] [] }} ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.099 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.168 [INFO][5022] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" HandleID="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.187 [INFO][5022] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" HandleID="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-d5d89d5cb-dtq7x", "timestamp":"2026-03-14 00:23:21.168676725 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000548dc0)} Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.187 [INFO][5022] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.187 [INFO][5022] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.187 [INFO][5022] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.192 [INFO][5022] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.217 [INFO][5022] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.234 [INFO][5022] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.238 [INFO][5022] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.243 [INFO][5022] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.243 [INFO][5022] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.245 [INFO][5022] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89 Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.256 [INFO][5022] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.268 [INFO][5022] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.269 [INFO][5022] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" host="localhost" Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.269 [INFO][5022] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:21.317379 containerd[1588]: 2026-03-14 00:23:21.269 [INFO][5022] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" HandleID="k8s-pod-network.ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.318448 containerd[1588]: 2026-03-14 00:23:21.274 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"730ac9f6-6ce5-4082-a3ad-c868f729e031", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5d89d5cb-dtq7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9d661cc222a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:21.318448 containerd[1588]: 2026-03-14 00:23:21.274 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.318448 containerd[1588]: 2026-03-14 00:23:21.274 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d661cc222a ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.318448 containerd[1588]: 2026-03-14 00:23:21.279 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.318448 containerd[1588]: 2026-03-14 00:23:21.280 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"730ac9f6-6ce5-4082-a3ad-c868f729e031", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89", Pod:"calico-apiserver-d5d89d5cb-dtq7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9d661cc222a", MAC:"ea:db:cd:a8:66:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:21.318448 containerd[1588]: 2026-03-14 00:23:21.311 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89" 
Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-dtq7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0" Mar 14 00:23:21.366750 containerd[1588]: time="2026-03-14T00:23:21.365490344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:21.366750 containerd[1588]: time="2026-03-14T00:23:21.365621844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:21.366750 containerd[1588]: time="2026-03-14T00:23:21.365640419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:21.366750 containerd[1588]: time="2026-03-14T00:23:21.365920023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:21.436931 systemd-networkd[1251]: cali0ede465171a: Link UP Mar 14 00:23:21.437505 systemd-networkd[1251]: cali0ede465171a: Gained carrier Mar 14 00:23:21.458236 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.133 [INFO][5005] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0 coredns-674b8bbfcf- kube-system 67d55006-38d6-455f-9fde-745c7e34d464 1083 0 2026-03-14 00:22:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-tl9f4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0ede465171a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.133 [INFO][5005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.202 [INFO][5030] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" HandleID="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.219 [INFO][5030] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" HandleID="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383c10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-tl9f4", "timestamp":"2026-03-14 00:23:21.202248538 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff8c0)} Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.220 [INFO][5030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.269 [INFO][5030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.269 [INFO][5030] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.297 [INFO][5030] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.325 [INFO][5030] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.347 [INFO][5030] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.357 [INFO][5030] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.373 [INFO][5030] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.373 [INFO][5030] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.380 [INFO][5030] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871 Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.395 [INFO][5030] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.419 [INFO][5030] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.419 [INFO][5030] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" host="localhost" Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.420 [INFO][5030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:21.474568 containerd[1588]: 2026-03-14 00:23:21.420 [INFO][5030] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" HandleID="k8s-pod-network.0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.480409 containerd[1588]: 2026-03-14 00:23:21.428 [INFO][5005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67d55006-38d6-455f-9fde-745c7e34d464", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-tl9f4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ede465171a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:21.480409 containerd[1588]: 2026-03-14 00:23:21.429 [INFO][5005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.480409 containerd[1588]: 2026-03-14 00:23:21.429 [INFO][5005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ede465171a ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.480409 containerd[1588]: 2026-03-14 00:23:21.439 [INFO][5005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.480409 containerd[1588]: 2026-03-14 00:23:21.444 [INFO][5005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67d55006-38d6-455f-9fde-745c7e34d464", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871", Pod:"coredns-674b8bbfcf-tl9f4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ede465171a", MAC:"f2:27:60:a5:1c:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:21.480409 containerd[1588]: 2026-03-14 00:23:21.469 [INFO][5005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871" Namespace="kube-system" Pod="coredns-674b8bbfcf-tl9f4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:23:21.563960 containerd[1588]: time="2026-03-14T00:23:21.561519234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-dtq7x,Uid:730ac9f6-6ce5-4082-a3ad-c868f729e031,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89\"" Mar 14 00:23:21.575953 containerd[1588]: time="2026-03-14T00:23:21.572650470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:23:21.579353 containerd[1588]: time="2026-03-14T00:23:21.577419138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:21.579353 containerd[1588]: time="2026-03-14T00:23:21.577560157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:21.579353 containerd[1588]: time="2026-03-14T00:23:21.577582980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:21.579353 containerd[1588]: time="2026-03-14T00:23:21.577837775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:21.639341 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:21.720204 containerd[1588]: time="2026-03-14T00:23:21.720085291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tl9f4,Uid:67d55006-38d6-455f-9fde-745c7e34d464,Namespace:kube-system,Attempt:1,} returns sandbox id \"0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871\"" Mar 14 00:23:21.722290 kubelet[2797]: E0314 00:23:21.721368 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:21.729904 containerd[1588]: time="2026-03-14T00:23:21.729672945Z" level=info msg="CreateContainer within sandbox \"0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:23:21.824090 containerd[1588]: time="2026-03-14T00:23:21.822165329Z" level=info msg="CreateContainer within sandbox \"0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7516b48cf10b3be63d7bca280e4ebfdc5f01ca95284df5ef88ddc6794a8a7eb\"" Mar 14 00:23:21.824090 containerd[1588]: time="2026-03-14T00:23:21.823629190Z" level=info msg="StartContainer for \"d7516b48cf10b3be63d7bca280e4ebfdc5f01ca95284df5ef88ddc6794a8a7eb\"" Mar 14 00:23:22.121184 containerd[1588]: time="2026-03-14T00:23:22.111447122Z" level=info msg="StartContainer for \"d7516b48cf10b3be63d7bca280e4ebfdc5f01ca95284df5ef88ddc6794a8a7eb\" returns successfully" Mar 14 00:23:22.561242 systemd-networkd[1251]: 
cali9d661cc222a: Gained IPv6LL Mar 14 00:23:22.626134 systemd-networkd[1251]: cali0ede465171a: Gained IPv6LL Mar 14 00:23:22.645787 kubelet[2797]: E0314 00:23:22.645527 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:22.673293 kubelet[2797]: I0314 00:23:22.672553 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tl9f4" podStartSLOduration=64.672527453 podStartE2EDuration="1m4.672527453s" podCreationTimestamp="2026-03-14 00:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:22.669959094 +0000 UTC m=+68.346927050" watchObservedRunningTime="2026-03-14 00:23:22.672527453 +0000 UTC m=+68.349495400" Mar 14 00:23:22.694211 containerd[1588]: time="2026-03-14T00:23:22.694042352Z" level=info msg="StopPodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\"" Mar 14 00:23:22.705468 containerd[1588]: time="2026-03-14T00:23:22.705397117Z" level=info msg="StopPodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\"" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.848 [INFO][5235] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.848 [INFO][5235] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" iface="eth0" netns="/var/run/netns/cni-8fddb59b-95e7-49e9-8db7-59ea71611fc0" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.849 [INFO][5235] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" iface="eth0" netns="/var/run/netns/cni-8fddb59b-95e7-49e9-8db7-59ea71611fc0" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.849 [INFO][5235] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" iface="eth0" netns="/var/run/netns/cni-8fddb59b-95e7-49e9-8db7-59ea71611fc0" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.849 [INFO][5235] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.849 [INFO][5235] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.934 [INFO][5258] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.935 [INFO][5258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.935 [INFO][5258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.953 [WARNING][5258] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.953 [INFO][5258] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.963 [INFO][5258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:22.975673 containerd[1588]: 2026-03-14 00:23:22.969 [INFO][5235] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Mar 14 00:23:22.975673 containerd[1588]: time="2026-03-14T00:23:22.975022543Z" level=info msg="TearDown network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" successfully" Mar 14 00:23:22.975673 containerd[1588]: time="2026-03-14T00:23:22.975054915Z" level=info msg="StopPodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" returns successfully" Mar 14 00:23:22.977139 kubelet[2797]: E0314 00:23:22.975649 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:22.980366 containerd[1588]: time="2026-03-14T00:23:22.980245768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjgfq,Uid:5170bc80-85e6-4371-b313-d56321f1c8e2,Namespace:kube-system,Attempt:1,}" Mar 14 00:23:22.982170 systemd[1]: run-netns-cni\x2d8fddb59b\x2d95e7\x2d49e9\x2d8db7\x2d59ea71611fc0.mount: Deactivated successfully. 
Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.899 [INFO][5246] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.899 [INFO][5246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" iface="eth0" netns="/var/run/netns/cni-6ddc3b20-a10a-b029-2ba3-9bf6c8731eeb" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.900 [INFO][5246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" iface="eth0" netns="/var/run/netns/cni-6ddc3b20-a10a-b029-2ba3-9bf6c8731eeb" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.903 [INFO][5246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" iface="eth0" netns="/var/run/netns/cni-6ddc3b20-a10a-b029-2ba3-9bf6c8731eeb" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.904 [INFO][5246] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.904 [INFO][5246] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.965 [INFO][5265] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.966 [INFO][5265] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.967 [INFO][5265] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.982 [WARNING][5265] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.982 [INFO][5265] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.986 [INFO][5265] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:22.998106 containerd[1588]: 2026-03-14 00:23:22.994 [INFO][5246] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:23:22.999569 containerd[1588]: time="2026-03-14T00:23:22.999513323Z" level=info msg="TearDown network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" successfully" Mar 14 00:23:22.999569 containerd[1588]: time="2026-03-14T00:23:22.999567747Z" level=info msg="StopPodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" returns successfully" Mar 14 00:23:23.004124 containerd[1588]: time="2026-03-14T00:23:23.004066691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-76h6t,Uid:b7bb7cd5-cb1e-4575-9842-e88d47314fe4,Namespace:calico-system,Attempt:1,}" Mar 14 00:23:23.011283 systemd[1]: run-netns-cni\x2d6ddc3b20\x2da10a\x2db029\x2d2ba3\x2d9bf6c8731eeb.mount: Deactivated successfully. Mar 14 00:23:23.583756 systemd-networkd[1251]: cali2a7c5f92f74: Link UP Mar 14 00:23:23.586526 systemd-networkd[1251]: cali2a7c5f92f74: Gained carrier Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.315 [INFO][5274] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0 coredns-674b8bbfcf- kube-system 5170bc80-85e6-4371-b313-d56321f1c8e2 1107 0 2026-03-14 00:22:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rjgfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a7c5f92f74 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.316 [INFO][5274] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.414 [INFO][5301] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" HandleID="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.434 [INFO][5301] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" HandleID="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000504c00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rjgfq", "timestamp":"2026-03-14 00:23:23.414669222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff4a0)} Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.434 [INFO][5301] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.434 [INFO][5301] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.435 [INFO][5301] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.464 [INFO][5301] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.482 [INFO][5301] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.493 [INFO][5301] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.497 [INFO][5301] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.502 [INFO][5301] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.502 [INFO][5301] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.505 [INFO][5301] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9 Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.523 [INFO][5301] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.552 [INFO][5301] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.553 [INFO][5301] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" host="localhost" Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.553 [INFO][5301] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:23.626457 containerd[1588]: 2026-03-14 00:23:23.553 [INFO][5301] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" HandleID="k8s-pod-network.be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.627605 containerd[1588]: 2026-03-14 00:23:23.567 [INFO][5274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5170bc80-85e6-4371-b313-d56321f1c8e2", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rjgfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7c5f92f74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:23.627605 containerd[1588]: 2026-03-14 00:23:23.570 [INFO][5274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.627605 containerd[1588]: 2026-03-14 00:23:23.570 [INFO][5274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a7c5f92f74 ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.627605 containerd[1588]: 2026-03-14 00:23:23.590 [INFO][5274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.627605 containerd[1588]: 2026-03-14 00:23:23.596 [INFO][5274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5170bc80-85e6-4371-b313-d56321f1c8e2", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9", Pod:"coredns-674b8bbfcf-rjgfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7c5f92f74", MAC:"2a:49:f4:76:2e:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:23.627605 containerd[1588]: 2026-03-14 00:23:23.622 [INFO][5274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjgfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0" Mar 14 00:23:23.657840 kubelet[2797]: E0314 00:23:23.657140 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:23.699768 containerd[1588]: time="2026-03-14T00:23:23.696566523Z" level=info msg="StopPodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\"" Mar 14 00:23:23.699768 containerd[1588]: time="2026-03-14T00:23:23.697214785Z" level=info msg="StopPodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\"" Mar 14 00:23:23.723658 systemd-networkd[1251]: cali2c6a144b3bd: Link UP Mar 14 00:23:23.738599 systemd-networkd[1251]: cali2c6a144b3bd: Gained carrier Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.346 [INFO][5287] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0 calico-apiserver-d5d89d5cb- calico-system b7bb7cd5-cb1e-4575-9842-e88d47314fe4 1108 0 2026-03-14 00:22:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5d89d5cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-d5d89d5cb-76h6t eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2c6a144b3bd [] [] }} ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.346 [INFO][5287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.468 [INFO][5308] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" HandleID="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.488 [INFO][5308] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" HandleID="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c3200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-d5d89d5cb-76h6t", "timestamp":"2026-03-14 00:23:23.468824366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fc6e0)} Mar 14 00:23:23.799981 
containerd[1588]: 2026-03-14 00:23:23.491 [INFO][5308] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.554 [INFO][5308] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.554 [INFO][5308] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.568 [INFO][5308] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.580 [INFO][5308] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.597 [INFO][5308] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.607 [INFO][5308] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.620 [INFO][5308] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.621 [INFO][5308] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.629 [INFO][5308] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.662 [INFO][5308] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.684 [INFO][5308] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.684 [INFO][5308] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" host="localhost" Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.684 [INFO][5308] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:23.799981 containerd[1588]: 2026-03-14 00:23:23.684 [INFO][5308] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" HandleID="k8s-pod-network.f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.800920 containerd[1588]: 2026-03-14 00:23:23.703 [INFO][5287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"b7bb7cd5-cb1e-4575-9842-e88d47314fe4", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5d89d5cb-76h6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2c6a144b3bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:23.800920 containerd[1588]: 2026-03-14 00:23:23.704 [INFO][5287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.800920 containerd[1588]: 2026-03-14 00:23:23.704 [INFO][5287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c6a144b3bd ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.800920 containerd[1588]: 2026-03-14 00:23:23.737 [INFO][5287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" 
Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.800920 containerd[1588]: 2026-03-14 00:23:23.744 [INFO][5287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"b7bb7cd5-cb1e-4575-9842-e88d47314fe4", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d", Pod:"calico-apiserver-d5d89d5cb-76h6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2c6a144b3bd", MAC:"7e:cc:d5:25:63:00", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:23.800920 containerd[1588]: 2026-03-14 00:23:23.785 [INFO][5287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d" Namespace="calico-system" Pod="calico-apiserver-d5d89d5cb-76h6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:23:23.820385 containerd[1588]: time="2026-03-14T00:23:23.819409004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:23.820385 containerd[1588]: time="2026-03-14T00:23:23.819504777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:23.820385 containerd[1588]: time="2026-03-14T00:23:23.819526288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:23.823674 containerd[1588]: time="2026-03-14T00:23:23.819680592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:23.884479 containerd[1588]: time="2026-03-14T00:23:23.882940918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:23.884479 containerd[1588]: time="2026-03-14T00:23:23.883549679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:23.884479 containerd[1588]: time="2026-03-14T00:23:23.883573545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:23.906754 containerd[1588]: time="2026-03-14T00:23:23.901652985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:23.925399 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:24.069339 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:24.143677 containerd[1588]: time="2026-03-14T00:23:24.139349227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjgfq,Uid:5170bc80-85e6-4371-b313-d56321f1c8e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9\"" Mar 14 00:23:24.144596 kubelet[2797]: E0314 00:23:24.143417 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:24.173073 containerd[1588]: time="2026-03-14T00:23:24.172212133Z" level=info msg="CreateContainer within sandbox \"be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:23:24.182003 containerd[1588]: time="2026-03-14T00:23:24.181769783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5d89d5cb-76h6t,Uid:b7bb7cd5-cb1e-4575-9842-e88d47314fe4,Namespace:calico-system,Attempt:1,} returns sandbox id \"f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d\"" Mar 14 00:23:24.222220 containerd[1588]: time="2026-03-14T00:23:24.222121669Z" level=info msg="CreateContainer within sandbox \"be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"cb558a1917c21c3b3c15f05a68bdb4104a34f228335022a6702afe6c5e7c8227\"" Mar 14 00:23:24.227418 containerd[1588]: time="2026-03-14T00:23:24.227229737Z" level=info msg="StartContainer for \"cb558a1917c21c3b3c15f05a68bdb4104a34f228335022a6702afe6c5e7c8227\"" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.033 [INFO][5350] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.034 [INFO][5350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" iface="eth0" netns="/var/run/netns/cni-ad68c01e-79be-6b60-ab0a-8ac0640384d4" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.034 [INFO][5350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" iface="eth0" netns="/var/run/netns/cni-ad68c01e-79be-6b60-ab0a-8ac0640384d4" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.039 [INFO][5350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" iface="eth0" netns="/var/run/netns/cni-ad68c01e-79be-6b60-ab0a-8ac0640384d4" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.039 [INFO][5350] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.039 [INFO][5350] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.233 [INFO][5459] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.234 [INFO][5459] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.234 [INFO][5459] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.258 [WARNING][5459] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.258 [INFO][5459] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.264 [INFO][5459] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:24.292862 containerd[1588]: 2026-03-14 00:23:24.274 [INFO][5350] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Mar 14 00:23:24.292862 containerd[1588]: time="2026-03-14T00:23:24.291587264Z" level=info msg="TearDown network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" successfully" Mar 14 00:23:24.292862 containerd[1588]: time="2026-03-14T00:23:24.291630897Z" level=info msg="StopPodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" returns successfully" Mar 14 00:23:24.292862 containerd[1588]: time="2026-03-14T00:23:24.292814234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7859c787-l2px5,Uid:99edc0ed-4d50-4c4c-9806-84b2bb9168af,Namespace:calico-system,Attempt:1,}" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.088 [INFO][5386] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.090 [INFO][5386] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" iface="eth0" netns="/var/run/netns/cni-045d646e-feb8-de47-bde2-ea141461dbb5" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.090 [INFO][5386] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" iface="eth0" netns="/var/run/netns/cni-045d646e-feb8-de47-bde2-ea141461dbb5" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.091 [INFO][5386] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" iface="eth0" netns="/var/run/netns/cni-045d646e-feb8-de47-bde2-ea141461dbb5" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.091 [INFO][5386] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.091 [INFO][5386] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.235 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.238 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.265 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.278 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.278 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.286 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:24.315270 containerd[1588]: 2026-03-14 00:23:24.302 [INFO][5386] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:23:24.317949 containerd[1588]: time="2026-03-14T00:23:24.317273086Z" level=info msg="TearDown network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" successfully" Mar 14 00:23:24.317949 containerd[1588]: time="2026-03-14T00:23:24.317942974Z" level=info msg="StopPodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" returns successfully" Mar 14 00:23:24.321042 containerd[1588]: time="2026-03-14T00:23:24.319792416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xr6mf,Uid:23027385-ff4e-4dfa-87df-bf52afa804b0,Namespace:calico-system,Attempt:1,}" Mar 14 00:23:24.426191 containerd[1588]: time="2026-03-14T00:23:24.424444674Z" level=info msg="StartContainer for \"cb558a1917c21c3b3c15f05a68bdb4104a34f228335022a6702afe6c5e7c8227\" returns successfully" Mar 14 00:23:24.665395 kubelet[2797]: E0314 00:23:24.664618 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:24.668988 kubelet[2797]: E0314 00:23:24.668641 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:24.677139 systemd-networkd[1251]: cali2a7c5f92f74: Gained IPv6LL Mar 14 00:23:24.716246 systemd-networkd[1251]: calif56391b34b7: Link UP Mar 14 00:23:24.718428 systemd-networkd[1251]: calif56391b34b7: Gained carrier Mar 14 00:23:24.729641 kubelet[2797]: I0314 00:23:24.729579 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rjgfq" podStartSLOduration=66.729557177 podStartE2EDuration="1m6.729557177s" podCreationTimestamp="2026-03-14 00:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:24.719172313 +0000 UTC m=+70.396140319" watchObservedRunningTime="2026-03-14 00:23:24.729557177 +0000 UTC m=+70.406525113" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.488 [INFO][5525] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--xr6mf-eth0 goldmane-5b85766d88- calico-system 23027385-ff4e-4dfa-87df-bf52afa804b0 1130 0 2026-03-14 00:22:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-xr6mf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif56391b34b7 [] [] }} ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-" Mar 14 00:23:24.786004 
containerd[1588]: 2026-03-14 00:23:24.489 [INFO][5525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.568 [INFO][5570] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" HandleID="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.591 [INFO][5570] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" HandleID="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000389300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-xr6mf", "timestamp":"2026-03-14 00:23:24.568216786 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000616000)} Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.591 [INFO][5570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.592 [INFO][5570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.592 [INFO][5570] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.598 [INFO][5570] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.608 [INFO][5570] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.625 [INFO][5570] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.628 [INFO][5570] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.635 [INFO][5570] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.635 [INFO][5570] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.645 [INFO][5570] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.654 [INFO][5570] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.686 [INFO][5570] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.689 [INFO][5570] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" host="localhost" Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.689 [INFO][5570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:24.786004 containerd[1588]: 2026-03-14 00:23:24.690 [INFO][5570] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" HandleID="k8s-pod-network.661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.788636 containerd[1588]: 2026-03-14 00:23:24.702 [INFO][5525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xr6mf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"23027385-ff4e-4dfa-87df-bf52afa804b0", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-xr6mf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif56391b34b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:24.788636 containerd[1588]: 2026-03-14 00:23:24.702 [INFO][5525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.788636 containerd[1588]: 2026-03-14 00:23:24.702 [INFO][5525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif56391b34b7 ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.788636 containerd[1588]: 2026-03-14 00:23:24.717 [INFO][5525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.788636 containerd[1588]: 2026-03-14 00:23:24.718 [INFO][5525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xr6mf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"23027385-ff4e-4dfa-87df-bf52afa804b0", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb", Pod:"goldmane-5b85766d88-xr6mf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif56391b34b7", MAC:"ae:7b:ae:b0:ef:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:24.788636 containerd[1588]: 2026-03-14 00:23:24.776 [INFO][5525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb" Namespace="calico-system" Pod="goldmane-5b85766d88-xr6mf" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:23:24.865878 systemd-networkd[1251]: cali2c6a144b3bd: Gained IPv6LL Mar 14 00:23:24.894972 
systemd-networkd[1251]: caliccc323872c2: Link UP Mar 14 00:23:24.896568 systemd-networkd[1251]: caliccc323872c2: Gained carrier Mar 14 00:23:24.925450 containerd[1588]: time="2026-03-14T00:23:24.924996887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:24.925450 containerd[1588]: time="2026-03-14T00:23:24.925096247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:24.925450 containerd[1588]: time="2026-03-14T00:23:24.925116656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:24.925450 containerd[1588]: time="2026-03-14T00:23:24.925254519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.486 [INFO][5515] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0 calico-kube-controllers-6f7859c787- calico-system 99edc0ed-4d50-4c4c-9806-84b2bb9168af 1129 0 2026-03-14 00:22:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f7859c787 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f7859c787-l2px5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliccc323872c2 [] [] }} ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.486 [INFO][5515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.600 [INFO][5577] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" HandleID="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.610 [INFO][5577] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" HandleID="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058ab00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f7859c787-l2px5", "timestamp":"2026-03-14 00:23:24.600528125 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f2420)} Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.611 [INFO][5577] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.697 [INFO][5577] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.697 [INFO][5577] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.708 [INFO][5577] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.770 [INFO][5577] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.791 [INFO][5577] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.795 [INFO][5577] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.801 [INFO][5577] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.801 [INFO][5577] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.809 [INFO][5577] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.822 [INFO][5577] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.859 [INFO][5577] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.859 [INFO][5577] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" host="localhost" Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.859 [INFO][5577] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:24.935302 containerd[1588]: 2026-03-14 00:23:24.859 [INFO][5577] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" HandleID="k8s-pod-network.d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.936419 containerd[1588]: 2026-03-14 00:23:24.874 [INFO][5515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0", GenerateName:"calico-kube-controllers-6f7859c787-", Namespace:"calico-system", SelfLink:"", UID:"99edc0ed-4d50-4c4c-9806-84b2bb9168af", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"6f7859c787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f7859c787-l2px5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccc323872c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:24.936419 containerd[1588]: 2026-03-14 00:23:24.877 [INFO][5515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.936419 containerd[1588]: 2026-03-14 00:23:24.877 [INFO][5515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccc323872c2 ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.936419 containerd[1588]: 2026-03-14 00:23:24.896 [INFO][5515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.936419 containerd[1588]: 2026-03-14 00:23:24.899 [INFO][5515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0", GenerateName:"calico-kube-controllers-6f7859c787-", Namespace:"calico-system", SelfLink:"", UID:"99edc0ed-4d50-4c4c-9806-84b2bb9168af", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7859c787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da", Pod:"calico-kube-controllers-6f7859c787-l2px5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccc323872c2", MAC:"12:43:a6:1e:dd:f2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:23:24.936419 containerd[1588]: 2026-03-14 00:23:24.924 [INFO][5515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da" Namespace="calico-system" Pod="calico-kube-controllers-6f7859c787-l2px5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0" Mar 14 00:23:24.992445 systemd[1]: run-netns-cni\x2d045d646e\x2dfeb8\x2dde47\x2dbde2\x2dea141461dbb5.mount: Deactivated successfully. Mar 14 00:23:24.992857 systemd[1]: run-netns-cni\x2dad68c01e\x2d79be\x2d6b60\x2dab0a\x2d8ac0640384d4.mount: Deactivated successfully. Mar 14 00:23:25.039092 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:25.061358 containerd[1588]: time="2026-03-14T00:23:25.061195191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:25.062630 containerd[1588]: time="2026-03-14T00:23:25.061622746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:25.063129 containerd[1588]: time="2026-03-14T00:23:25.062620118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:25.063129 containerd[1588]: time="2026-03-14T00:23:25.062991796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:25.130365 systemd[1]: run-containerd-runc-k8s.io-d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da-runc.GsO3zr.mount: Deactivated successfully. 
Mar 14 00:23:25.162526 containerd[1588]: time="2026-03-14T00:23:25.162383634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xr6mf,Uid:23027385-ff4e-4dfa-87df-bf52afa804b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb\"" Mar 14 00:23:25.170574 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:23:25.220223 containerd[1588]: time="2026-03-14T00:23:25.219844460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7859c787-l2px5,Uid:99edc0ed-4d50-4c4c-9806-84b2bb9168af,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da\"" Mar 14 00:23:25.592494 containerd[1588]: time="2026-03-14T00:23:25.592346480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:25.593772 containerd[1588]: time="2026-03-14T00:23:25.593655569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:23:25.595681 containerd[1588]: time="2026-03-14T00:23:25.595564517Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:25.601051 containerd[1588]: time="2026-03-14T00:23:25.600834083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:25.602447 containerd[1588]: time="2026-03-14T00:23:25.602389188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.029542575s" Mar 14 00:23:25.602515 containerd[1588]: time="2026-03-14T00:23:25.602463069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:23:25.604540 containerd[1588]: time="2026-03-14T00:23:25.604400465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:23:25.611145 containerd[1588]: time="2026-03-14T00:23:25.610979147Z" level=info msg="CreateContainer within sandbox \"ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:23:25.639608 containerd[1588]: time="2026-03-14T00:23:25.639474248Z" level=info msg="CreateContainer within sandbox \"ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e618b04c6acba369464180a34c61b0b92e22eb5321e071f7ba6825a3abdb4d55\"" Mar 14 00:23:25.642344 containerd[1588]: time="2026-03-14T00:23:25.642297466Z" level=info msg="StartContainer for \"e618b04c6acba369464180a34c61b0b92e22eb5321e071f7ba6825a3abdb4d55\"" Mar 14 00:23:25.678461 kubelet[2797]: E0314 00:23:25.677149 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:25.694044 kubelet[2797]: E0314 00:23:25.693013 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:25.758181 containerd[1588]: 
time="2026-03-14T00:23:25.757430938Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:25.759279 containerd[1588]: time="2026-03-14T00:23:25.759232843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:23:25.764108 containerd[1588]: time="2026-03-14T00:23:25.764065270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 159.602526ms" Mar 14 00:23:25.764108 containerd[1588]: time="2026-03-14T00:23:25.764111748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:23:25.773928 containerd[1588]: time="2026-03-14T00:23:25.773100974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:23:25.780530 containerd[1588]: time="2026-03-14T00:23:25.780359588Z" level=info msg="CreateContainer within sandbox \"f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:23:25.841931 containerd[1588]: time="2026-03-14T00:23:25.841187180Z" level=info msg="CreateContainer within sandbox \"f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60ded2a22e15dabffc1b2b7514b62c27f238dc33f9816badb2fb0cec2ccd8a61\"" Mar 14 00:23:25.853082 containerd[1588]: time="2026-03-14T00:23:25.842574626Z" level=info msg="StartContainer for 
\"e618b04c6acba369464180a34c61b0b92e22eb5321e071f7ba6825a3abdb4d55\" returns successfully" Mar 14 00:23:25.859358 containerd[1588]: time="2026-03-14T00:23:25.858377858Z" level=info msg="StartContainer for \"60ded2a22e15dabffc1b2b7514b62c27f238dc33f9816badb2fb0cec2ccd8a61\"" Mar 14 00:23:25.953360 systemd-networkd[1251]: caliccc323872c2: Gained IPv6LL Mar 14 00:23:26.104246 containerd[1588]: time="2026-03-14T00:23:26.103922865Z" level=info msg="StartContainer for \"60ded2a22e15dabffc1b2b7514b62c27f238dc33f9816badb2fb0cec2ccd8a61\" returns successfully" Mar 14 00:23:26.466354 systemd-networkd[1251]: calif56391b34b7: Gained IPv6LL Mar 14 00:23:26.720985 kubelet[2797]: E0314 00:23:26.719597 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:26.813764 kubelet[2797]: I0314 00:23:26.812130 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-d5d89d5cb-76h6t" podStartSLOduration=48.229063213 podStartE2EDuration="49.812059461s" podCreationTimestamp="2026-03-14 00:22:37 +0000 UTC" firstStartedPulling="2026-03-14 00:23:24.186404868 +0000 UTC m=+69.863372805" lastFinishedPulling="2026-03-14 00:23:25.769401096 +0000 UTC m=+71.446369053" observedRunningTime="2026-03-14 00:23:26.74823268 +0000 UTC m=+72.425200617" watchObservedRunningTime="2026-03-14 00:23:26.812059461 +0000 UTC m=+72.489027397" Mar 14 00:23:27.723551 kubelet[2797]: I0314 00:23:27.723372 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:23:27.869776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656073746.mount: Deactivated successfully. 
Mar 14 00:23:28.087393 kubelet[2797]: I0314 00:23:28.087275 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-d5d89d5cb-dtq7x" podStartSLOduration=47.051051071 podStartE2EDuration="51.087246484s" podCreationTimestamp="2026-03-14 00:22:37 +0000 UTC" firstStartedPulling="2026-03-14 00:23:21.567892501 +0000 UTC m=+67.244860437" lastFinishedPulling="2026-03-14 00:23:25.604087914 +0000 UTC m=+71.281055850" observedRunningTime="2026-03-14 00:23:26.810054416 +0000 UTC m=+72.487022352" watchObservedRunningTime="2026-03-14 00:23:28.087246484 +0000 UTC m=+73.764214440" Mar 14 00:23:28.696539 kubelet[2797]: E0314 00:23:28.696186 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:28.696864 kubelet[2797]: E0314 00:23:28.696761 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:28.927895 containerd[1588]: time="2026-03-14T00:23:28.927605543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:28.930238 containerd[1588]: time="2026-03-14T00:23:28.930057044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 14 00:23:28.932750 containerd[1588]: time="2026-03-14T00:23:28.932585677Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:28.944913 containerd[1588]: time="2026-03-14T00:23:28.944293987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:28.945736 containerd[1588]: time="2026-03-14T00:23:28.945604045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.172448889s" Mar 14 00:23:28.945736 containerd[1588]: time="2026-03-14T00:23:28.945661936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 14 00:23:28.950025 containerd[1588]: time="2026-03-14T00:23:28.949644636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:23:28.960582 containerd[1588]: time="2026-03-14T00:23:28.960459462Z" level=info msg="CreateContainer within sandbox \"661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:23:28.992512 containerd[1588]: time="2026-03-14T00:23:28.992238541Z" level=info msg="CreateContainer within sandbox \"661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"94e256d1134b2c6cce82a3d1fe13213aee080ab881971d1bf124cf16b0f557b5\"" Mar 14 00:23:28.994970 containerd[1588]: time="2026-03-14T00:23:28.993237407Z" level=info msg="StartContainer for \"94e256d1134b2c6cce82a3d1fe13213aee080ab881971d1bf124cf16b0f557b5\"" Mar 14 00:23:29.253447 containerd[1588]: time="2026-03-14T00:23:29.253080349Z" level=info msg="StartContainer for \"94e256d1134b2c6cce82a3d1fe13213aee080ab881971d1bf124cf16b0f557b5\" returns successfully" Mar 14 00:23:29.804214 kubelet[2797]: I0314 00:23:29.804066 2797 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-xr6mf" podStartSLOduration=49.027640215 podStartE2EDuration="52.80404057s" podCreationTimestamp="2026-03-14 00:22:37 +0000 UTC" firstStartedPulling="2026-03-14 00:23:25.171159051 +0000 UTC m=+70.848126987" lastFinishedPulling="2026-03-14 00:23:28.947559406 +0000 UTC m=+74.624527342" observedRunningTime="2026-03-14 00:23:29.797166614 +0000 UTC m=+75.474134570" watchObservedRunningTime="2026-03-14 00:23:29.80404057 +0000 UTC m=+75.481008536" Mar 14 00:23:30.827374 systemd[1]: run-containerd-runc-k8s.io-94e256d1134b2c6cce82a3d1fe13213aee080ab881971d1bf124cf16b0f557b5-runc.4Nic7t.mount: Deactivated successfully. Mar 14 00:23:31.545040 containerd[1588]: time="2026-03-14T00:23:31.544829329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:31.546839 containerd[1588]: time="2026-03-14T00:23:31.546795182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:23:31.551546 containerd[1588]: time="2026-03-14T00:23:31.550537761Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:31.555595 containerd[1588]: time="2026-03-14T00:23:31.555495114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:31.556466 containerd[1588]: time="2026-03-14T00:23:31.556384642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.606589329s" Mar 14 00:23:31.556466 containerd[1588]: time="2026-03-14T00:23:31.556451099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:23:31.624889 containerd[1588]: time="2026-03-14T00:23:31.624680574Z" level=info msg="CreateContainer within sandbox \"d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:23:31.662661 containerd[1588]: time="2026-03-14T00:23:31.662517835Z" level=info msg="CreateContainer within sandbox \"d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8a216cab207febde6401cb020f735db2e32235d5655067cd3dc3d2a0a67dc2e8\"" Mar 14 00:23:31.663560 containerd[1588]: time="2026-03-14T00:23:31.663509701Z" level=info msg="StartContainer for \"8a216cab207febde6401cb020f735db2e32235d5655067cd3dc3d2a0a67dc2e8\"" Mar 14 00:23:31.809907 containerd[1588]: time="2026-03-14T00:23:31.808641225Z" level=info msg="StartContainer for \"8a216cab207febde6401cb020f735db2e32235d5655067cd3dc3d2a0a67dc2e8\" returns successfully" Mar 14 00:23:33.162796 kubelet[2797]: I0314 00:23:33.159132 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f7859c787-l2px5" podStartSLOduration=47.815108344 podStartE2EDuration="54.159108061s" podCreationTimestamp="2026-03-14 00:22:39 +0000 UTC" firstStartedPulling="2026-03-14 00:23:25.223596005 +0000 UTC m=+70.900563940" lastFinishedPulling="2026-03-14 00:23:31.56759572 +0000 UTC m=+77.244563657" observedRunningTime="2026-03-14 
00:23:32.974418833 +0000 UTC m=+78.651386829" watchObservedRunningTime="2026-03-14 00:23:33.159108061 +0000 UTC m=+78.836075997" Mar 14 00:23:34.703619 kubelet[2797]: E0314 00:23:34.703068 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:37.710162 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:59926.service - OpenSSH per-connection server daemon (10.0.0.1:59926). Mar 14 00:23:37.936215 sshd[6027]: Accepted publickey for core from 10.0.0.1 port 59926 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:23:37.954622 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:23:37.970628 systemd-logind[1559]: New session 10 of user core. Mar 14 00:23:37.986555 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:23:39.054238 sshd[6027]: pam_unix(sshd:session): session closed for user core Mar 14 00:23:39.061955 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:59926.service: Deactivated successfully. Mar 14 00:23:39.066469 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:23:39.070403 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:23:39.075328 systemd-logind[1559]: Removed session 10. Mar 14 00:23:44.070566 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:53340.service - OpenSSH per-connection server daemon (10.0.0.1:53340). Mar 14 00:23:44.189087 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 53340 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:23:44.192410 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:23:44.213926 systemd-logind[1559]: New session 11 of user core. Mar 14 00:23:44.223333 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 14 00:23:44.587428 sshd[6093]: pam_unix(sshd:session): session closed for user core Mar 14 00:23:44.596018 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:53340.service: Deactivated successfully. Mar 14 00:23:44.603654 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:23:44.604678 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:23:44.606518 systemd-logind[1559]: Removed session 11. Mar 14 00:23:49.603459 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:53352.service - OpenSSH per-connection server daemon (10.0.0.1:53352). Mar 14 00:23:49.715000 sshd[6115]: Accepted publickey for core from 10.0.0.1 port 53352 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:23:49.719060 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:23:49.740187 systemd-logind[1559]: New session 12 of user core. Mar 14 00:23:49.766889 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:23:50.082061 sshd[6115]: pam_unix(sshd:session): session closed for user core Mar 14 00:23:50.091521 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:53352.service: Deactivated successfully. Mar 14 00:23:50.099682 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:23:50.099876 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:23:50.102433 systemd-logind[1559]: Removed session 12. Mar 14 00:23:55.098834 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:50450.service - OpenSSH per-connection server daemon (10.0.0.1:50450). Mar 14 00:23:55.227473 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:23:55.231541 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:23:55.273344 systemd-logind[1559]: New session 13 of user core. 
Mar 14 00:23:55.291885 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:23:55.544141 sshd[6148]: pam_unix(sshd:session): session closed for user core Mar 14 00:23:55.550340 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:50450.service: Deactivated successfully. Mar 14 00:23:55.555786 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:23:55.556999 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:23:55.559202 systemd-logind[1559]: Removed session 13. Mar 14 00:24:00.572229 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:55702.service - OpenSSH per-connection server daemon (10.0.0.1:55702). Mar 14 00:24:00.669751 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 55702 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:24:00.675391 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:00.688986 systemd-logind[1559]: New session 14 of user core. Mar 14 00:24:00.704353 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:24:01.009935 sshd[6165]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:01.019842 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:55702.service: Deactivated successfully. Mar 14 00:24:01.032656 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:24:01.032920 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:24:01.047535 systemd-logind[1559]: Removed session 14. Mar 14 00:24:03.620303 kubelet[2797]: I0314 00:24:03.616673 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:24:06.060188 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:55716.service - OpenSSH per-connection server daemon (10.0.0.1:55716). 
Mar 14 00:24:06.187959 sshd[6231]: Accepted publickey for core from 10.0.0.1 port 55716 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:24:06.193782 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:06.219201 systemd-logind[1559]: New session 15 of user core. Mar 14 00:24:06.252398 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:24:06.699195 sshd[6231]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:06.712640 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:55716.service: Deactivated successfully. Mar 14 00:24:06.718953 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:24:06.720378 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:24:06.724013 systemd-logind[1559]: Removed session 15. Mar 14 00:24:10.696374 kubelet[2797]: E0314 00:24:10.696234 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:24:11.722399 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:53150.service - OpenSSH per-connection server daemon (10.0.0.1:53150). Mar 14 00:24:11.768876 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 53150 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:24:11.771292 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:11.778858 systemd-logind[1559]: New session 16 of user core. Mar 14 00:24:11.789362 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:24:11.993493 sshd[6271]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:11.999654 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:53150.service: Deactivated successfully. Mar 14 00:24:12.004867 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. 
Mar 14 00:24:12.006199 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:24:12.008823 systemd-logind[1559]: Removed session 16.
Mar 14 00:24:15.704073 containerd[1588]: time="2026-03-14T00:24:15.703937553Z" level=info msg="StopPodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\""
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:15.889 [WARNING][6300] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0", GenerateName:"calico-kube-controllers-6f7859c787-", Namespace:"calico-system", SelfLink:"", UID:"99edc0ed-4d50-4c4c-9806-84b2bb9168af", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7859c787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da", Pod:"calico-kube-controllers-6f7859c787-l2px5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccc323872c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:15.891 [INFO][6300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:15.891 [INFO][6300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" iface="eth0" netns=""
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:15.891 [INFO][6300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:15.891 [INFO][6300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.145 [INFO][6308] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0"
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.147 [INFO][6308] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.148 [INFO][6308] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.170 [WARNING][6308] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0"
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.171 [INFO][6308] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0"
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.179 [INFO][6308] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:24:16.200161 containerd[1588]: 2026-03-14 00:24:16.188 [INFO][6300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.240258 containerd[1588]: time="2026-03-14T00:24:16.240103335Z" level=info msg="TearDown network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" successfully"
Mar 14 00:24:16.240258 containerd[1588]: time="2026-03-14T00:24:16.240199673Z" level=info msg="StopPodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" returns successfully"
Mar 14 00:24:16.244933 containerd[1588]: time="2026-03-14T00:24:16.241379069Z" level=info msg="RemovePodSandbox for \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\""
Mar 14 00:24:16.244933 containerd[1588]: time="2026-03-14T00:24:16.241418752Z" level=info msg="Forcibly stopping sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\""
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.369 [WARNING][6325] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0", GenerateName:"calico-kube-controllers-6f7859c787-", Namespace:"calico-system", SelfLink:"", UID:"99edc0ed-4d50-4c4c-9806-84b2bb9168af", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7859c787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7a202ab44c79327219b96fd7ab4f619d699b7ccb50734cae93f0b66f07f49da", Pod:"calico-kube-controllers-6f7859c787-l2px5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccc323872c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.370 [INFO][6325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.370 [INFO][6325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" iface="eth0" netns=""
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.371 [INFO][6325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.371 [INFO][6325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.451 [INFO][6334] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0"
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.452 [INFO][6334] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.452 [INFO][6334] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.471 [WARNING][6334] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0"
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.471 [INFO][6334] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" HandleID="k8s-pod-network.1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70" Workload="localhost-k8s-calico--kube--controllers--6f7859c787--l2px5-eth0"
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.482 [INFO][6334] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:24:16.495294 containerd[1588]: 2026-03-14 00:24:16.487 [INFO][6325] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70"
Mar 14 00:24:16.495294 containerd[1588]: time="2026-03-14T00:24:16.494353737Z" level=info msg="TearDown network for sandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" successfully"
Mar 14 00:24:16.519321 containerd[1588]: time="2026-03-14T00:24:16.518577754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:24:16.519321 containerd[1588]: time="2026-03-14T00:24:16.518800857Z" level=info msg="RemovePodSandbox \"1824b55bdc2bc42fd832a662fd7cf1b3324226a177e95165d2ee6f1cc19aac70\" returns successfully"
Mar 14 00:24:16.520009 containerd[1588]: time="2026-03-14T00:24:16.519891945Z" level=info msg="StopPodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\""
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.621 [WARNING][6351] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"730ac9f6-6ce5-4082-a3ad-c868f729e031", ResourceVersion:"1354", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89", Pod:"calico-apiserver-d5d89d5cb-dtq7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9d661cc222a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.621 [INFO][6351] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.621 [INFO][6351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" iface="eth0" netns=""
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.621 [INFO][6351] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.621 [INFO][6351] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.674 [INFO][6359] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0"
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.675 [INFO][6359] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.675 [INFO][6359] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.686 [WARNING][6359] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0"
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.686 [INFO][6359] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0"
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.692 [INFO][6359] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:24:16.700353 containerd[1588]: 2026-03-14 00:24:16.695 [INFO][6351] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.701281 containerd[1588]: time="2026-03-14T00:24:16.700419861Z" level=info msg="TearDown network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" successfully"
Mar 14 00:24:16.701281 containerd[1588]: time="2026-03-14T00:24:16.700460837Z" level=info msg="StopPodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" returns successfully"
Mar 14 00:24:16.702199 containerd[1588]: time="2026-03-14T00:24:16.701315230Z" level=info msg="RemovePodSandbox for \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\""
Mar 14 00:24:16.702199 containerd[1588]: time="2026-03-14T00:24:16.701353621Z" level=info msg="Forcibly stopping sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\""
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.803 [WARNING][6378] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"730ac9f6-6ce5-4082-a3ad-c868f729e031", ResourceVersion:"1354", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea815b424ac4c66decfd2f96071710a45851a05c8caec4dec843f96c68642f89", Pod:"calico-apiserver-d5d89d5cb-dtq7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9d661cc222a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.803 [INFO][6378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.803 [INFO][6378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" iface="eth0" netns=""
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.804 [INFO][6378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.804 [INFO][6378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.855 [INFO][6387] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0"
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.855 [INFO][6387] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.855 [INFO][6387] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.866 [WARNING][6387] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0"
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.867 [INFO][6387] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" HandleID="k8s-pod-network.24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--dtq7x-eth0"
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.870 [INFO][6387] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:24:16.881808 containerd[1588]: 2026-03-14 00:24:16.876 [INFO][6378] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5"
Mar 14 00:24:16.883015 containerd[1588]: time="2026-03-14T00:24:16.881852466Z" level=info msg="TearDown network for sandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" successfully"
Mar 14 00:24:16.892733 containerd[1588]: time="2026-03-14T00:24:16.892456906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:24:16.892733 containerd[1588]: time="2026-03-14T00:24:16.892626771Z" level=info msg="RemovePodSandbox \"24c8c21a2ef04ed0ab8b43946361689e8937ba5e97ceee2602c5c3f40441d6a5\" returns successfully"
Mar 14 00:24:16.893448 containerd[1588]: time="2026-03-14T00:24:16.893361619Z" level=info msg="StopPodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\""
Mar 14 00:24:17.012277 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166).
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.005 [WARNING][6404] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5170bc80-85e6-4371-b313-d56321f1c8e2", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9", Pod:"coredns-674b8bbfcf-rjgfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7c5f92f74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.006 [INFO][6404] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.006 [INFO][6404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" iface="eth0" netns=""
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.006 [INFO][6404] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.006 [INFO][6404] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.072 [INFO][6413] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0"
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.077 [INFO][6413] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.077 [INFO][6413] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.094 [WARNING][6413] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0"
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.094 [INFO][6413] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0"
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.099 [INFO][6413] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:24:17.111816 containerd[1588]: 2026-03-14 00:24:17.106 [INFO][6404] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.112580 containerd[1588]: time="2026-03-14T00:24:17.111874139Z" level=info msg="TearDown network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" successfully"
Mar 14 00:24:17.112580 containerd[1588]: time="2026-03-14T00:24:17.111917128Z" level=info msg="StopPodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" returns successfully"
Mar 14 00:24:17.113282 containerd[1588]: time="2026-03-14T00:24:17.113167426Z" level=info msg="RemovePodSandbox for \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\""
Mar 14 00:24:17.113282 containerd[1588]: time="2026-03-14T00:24:17.113212250Z" level=info msg="Forcibly stopping sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\""
Mar 14 00:24:17.148843 sshd[6411]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:17.158187 sshd[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:17.181325 systemd-logind[1559]: New session 17 of user core.
Mar 14 00:24:17.187975 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.254 [WARNING][6433] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5170bc80-85e6-4371-b313-d56321f1c8e2", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be03c71382b535504de01bc889e99bf3bb08563f684ce28f242cb3f9abbb65a9", Pod:"coredns-674b8bbfcf-rjgfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7c5f92f74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.255 [INFO][6433] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.255 [INFO][6433] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" iface="eth0" netns=""
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.255 [INFO][6433] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.255 [INFO][6433] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.331 [INFO][6448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0"
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.331 [INFO][6448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.332 [INFO][6448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.349 [WARNING][6448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0"
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.349 [INFO][6448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" HandleID="k8s-pod-network.fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3" Workload="localhost-k8s-coredns--674b8bbfcf--rjgfq-eth0"
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.357 [INFO][6448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:24:17.366909 containerd[1588]: 2026-03-14 00:24:17.361 [INFO][6433] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3"
Mar 14 00:24:17.366909 containerd[1588]: time="2026-03-14T00:24:17.366269193Z" level=info msg="TearDown network for sandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" successfully"
Mar 14 00:24:17.387911 containerd[1588]: time="2026-03-14T00:24:17.387618661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:24:17.388065 containerd[1588]: time="2026-03-14T00:24:17.387934436Z" level=info msg="RemovePodSandbox \"fcb59799d91e4d594eea80283171ca93717c38626c74a0851747d8718cc308a3\" returns successfully" Mar 14 00:24:17.389413 containerd[1588]: time="2026-03-14T00:24:17.389316600Z" level=info msg="StopPodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\"" Mar 14 00:24:17.582386 sshd[6411]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:17.592804 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:53166.service: Deactivated successfully. Mar 14 00:24:17.598303 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:24:17.599578 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.483 [WARNING][6473] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xr6mf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"23027385-ff4e-4dfa-87df-bf52afa804b0", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb", Pod:"goldmane-5b85766d88-xr6mf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif56391b34b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.483 [INFO][6473] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.483 [INFO][6473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" iface="eth0" netns="" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.483 [INFO][6473] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.484 [INFO][6473] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.566 [INFO][6482] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.566 [INFO][6482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.567 [INFO][6482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.582 [WARNING][6482] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.582 [INFO][6482] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.587 [INFO][6482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.600870 containerd[1588]: 2026-03-14 00:24:17.596 [INFO][6473] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.602306 containerd[1588]: time="2026-03-14T00:24:17.600923894Z" level=info msg="TearDown network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" successfully" Mar 14 00:24:17.602306 containerd[1588]: time="2026-03-14T00:24:17.600970982Z" level=info msg="StopPodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" returns successfully" Mar 14 00:24:17.602306 containerd[1588]: time="2026-03-14T00:24:17.601827514Z" level=info msg="RemovePodSandbox for \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\"" Mar 14 00:24:17.602306 containerd[1588]: time="2026-03-14T00:24:17.601871697Z" level=info msg="Forcibly stopping sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\"" Mar 14 00:24:17.602332 systemd-logind[1559]: Removed session 17. Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.700 [WARNING][6504] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xr6mf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"23027385-ff4e-4dfa-87df-bf52afa804b0", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"661768dfc3da30ee4f77a0f50dbdf9621d2f6a27c0faa50d65a36865bad529eb", Pod:"goldmane-5b85766d88-xr6mf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif56391b34b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.700 [INFO][6504] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.701 [INFO][6504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" iface="eth0" netns="" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.701 [INFO][6504] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.701 [INFO][6504] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.784 [INFO][6513] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.785 [INFO][6513] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.786 [INFO][6513] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.798 [WARNING][6513] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.799 [INFO][6513] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" HandleID="k8s-pod-network.970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Workload="localhost-k8s-goldmane--5b85766d88--xr6mf-eth0" Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.807 [INFO][6513] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.821264 containerd[1588]: 2026-03-14 00:24:17.812 [INFO][6504] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f" Mar 14 00:24:17.822058 containerd[1588]: time="2026-03-14T00:24:17.821291558Z" level=info msg="TearDown network for sandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" successfully" Mar 14 00:24:17.833280 containerd[1588]: time="2026-03-14T00:24:17.833145597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:17.833280 containerd[1588]: time="2026-03-14T00:24:17.833280827Z" level=info msg="RemovePodSandbox \"970c9d3eb2ab58d86dbf33fcd093e6e9f3714c342927d6407bd487293eb8077f\" returns successfully" Mar 14 00:24:17.834934 containerd[1588]: time="2026-03-14T00:24:17.834768345Z" level=info msg="StopPodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\"" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:17.942 [WARNING][6530] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67d55006-38d6-455f-9fde-745c7e34d464", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871", Pod:"coredns-674b8bbfcf-tl9f4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ede465171a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:17.943 [INFO][6530] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:17.943 [INFO][6530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" iface="eth0" netns="" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:17.943 [INFO][6530] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:17.943 [INFO][6530] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.003 [INFO][6538] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.004 [INFO][6538] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.004 [INFO][6538] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.023 [WARNING][6538] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.023 [INFO][6538] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.026 [INFO][6538] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:18.035049 containerd[1588]: 2026-03-14 00:24:18.031 [INFO][6530] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.035049 containerd[1588]: time="2026-03-14T00:24:18.034867052Z" level=info msg="TearDown network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" successfully" Mar 14 00:24:18.035049 containerd[1588]: time="2026-03-14T00:24:18.034898070Z" level=info msg="StopPodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" returns successfully" Mar 14 00:24:18.036234 containerd[1588]: time="2026-03-14T00:24:18.035592372Z" level=info msg="RemovePodSandbox for \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\"" Mar 14 00:24:18.036234 containerd[1588]: time="2026-03-14T00:24:18.035635823Z" level=info msg="Forcibly stopping sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\"" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.111 [WARNING][6556] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67d55006-38d6-455f-9fde-745c7e34d464", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0847867bf02d4c463092174cd87c89877281a3f7edc9a08684076dad3131f871", Pod:"coredns-674b8bbfcf-tl9f4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ede465171a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.112 [INFO][6556] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.112 [INFO][6556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" iface="eth0" netns="" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.112 [INFO][6556] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.112 [INFO][6556] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.170 [INFO][6564] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.170 [INFO][6564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.171 [INFO][6564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.180 [WARNING][6564] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.181 [INFO][6564] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" HandleID="k8s-pod-network.26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Workload="localhost-k8s-coredns--674b8bbfcf--tl9f4-eth0" Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.185 [INFO][6564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:18.198608 containerd[1588]: 2026-03-14 00:24:18.190 [INFO][6556] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17" Mar 14 00:24:18.198608 containerd[1588]: time="2026-03-14T00:24:18.198083814Z" level=info msg="TearDown network for sandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" successfully" Mar 14 00:24:18.222996 containerd[1588]: time="2026-03-14T00:24:18.222807564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:18.222996 containerd[1588]: time="2026-03-14T00:24:18.222920184Z" level=info msg="RemovePodSandbox \"26355d4368d8cfe7ac2740b7fbc8e21b5aa308b41dfb190d7dc602480cc0db17\" returns successfully" Mar 14 00:24:18.225146 containerd[1588]: time="2026-03-14T00:24:18.224894707Z" level=info msg="StopPodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\"" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.316 [WARNING][6582] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"b7bb7cd5-cb1e-4575-9842-e88d47314fe4", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d", Pod:"calico-apiserver-d5d89d5cb-76h6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2c6a144b3bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.316 [INFO][6582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.316 [INFO][6582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" iface="eth0" netns="" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.317 [INFO][6582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.317 [INFO][6582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.378 [INFO][6591] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.378 [INFO][6591] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.378 [INFO][6591] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.390 [WARNING][6591] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.391 [INFO][6591] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.399 [INFO][6591] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:18.412647 containerd[1588]: 2026-03-14 00:24:18.404 [INFO][6582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.413789 containerd[1588]: time="2026-03-14T00:24:18.413544287Z" level=info msg="TearDown network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" successfully" Mar 14 00:24:18.413789 containerd[1588]: time="2026-03-14T00:24:18.413593147Z" level=info msg="StopPodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" returns successfully" Mar 14 00:24:18.416022 containerd[1588]: time="2026-03-14T00:24:18.415962241Z" level=info msg="RemovePodSandbox for \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\"" Mar 14 00:24:18.416157 containerd[1588]: time="2026-03-14T00:24:18.416040686Z" level=info msg="Forcibly stopping sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\"" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.517 [WARNING][6608] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0", GenerateName:"calico-apiserver-d5d89d5cb-", Namespace:"calico-system", SelfLink:"", UID:"b7bb7cd5-cb1e-4575-9842-e88d47314fe4", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5d89d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11bdd3727a872cc0bc299341f1349cc668e5f40b7ffd6a99c19ae3c4f65c29d", Pod:"calico-apiserver-d5d89d5cb-76h6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2c6a144b3bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.518 [INFO][6608] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.518 [INFO][6608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" iface="eth0" netns="" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.518 [INFO][6608] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.518 [INFO][6608] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.622 [INFO][6617] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.625 [INFO][6617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.625 [INFO][6617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.639 [WARNING][6617] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.639 [INFO][6617] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" HandleID="k8s-pod-network.84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Workload="localhost-k8s-calico--apiserver--d5d89d5cb--76h6t-eth0" Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.647 [INFO][6617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:18.659972 containerd[1588]: 2026-03-14 00:24:18.655 [INFO][6608] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470" Mar 14 00:24:18.660835 containerd[1588]: time="2026-03-14T00:24:18.659954954Z" level=info msg="TearDown network for sandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" successfully" Mar 14 00:24:18.670997 containerd[1588]: time="2026-03-14T00:24:18.670016047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:24:18.670997 containerd[1588]: time="2026-03-14T00:24:18.670439963Z" level=info msg="RemovePodSandbox \"84465de2b3ac731940028faf235eff077b6abe91ce94bc7a0c1954bf12490470\" returns successfully" Mar 14 00:24:22.626555 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:36876.service - OpenSSH per-connection server daemon (10.0.0.1:36876). 
Mar 14 00:24:22.690661 sshd[6627]: Accepted publickey for core from 10.0.0.1 port 36876 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:22.693425 sshd[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:22.702806 systemd-logind[1559]: New session 18 of user core.
Mar 14 00:24:22.712869 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:24:23.037134 sshd[6627]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:23.070783 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884).
Mar 14 00:24:23.075847 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:36876.service: Deactivated successfully.
Mar 14 00:24:23.090183 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:24:23.093561 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:24:23.098668 systemd-logind[1559]: Removed session 18.
Mar 14 00:24:23.138421 sshd[6640]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:23.167534 sshd[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:23.191851 systemd-logind[1559]: New session 19 of user core.
Mar 14 00:24:23.205119 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:24:23.611928 sshd[6640]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:23.632675 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:36900.service - OpenSSH per-connection server daemon (10.0.0.1:36900).
Mar 14 00:24:23.634068 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:36884.service: Deactivated successfully.
Mar 14 00:24:23.647929 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:24:23.651135 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:24:23.658104 systemd-logind[1559]: Removed session 19.
Mar 14 00:24:23.695441 sshd[6654]: Accepted publickey for core from 10.0.0.1 port 36900 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:23.698341 sshd[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:23.712366 systemd-logind[1559]: New session 20 of user core.
Mar 14 00:24:23.719938 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:24:23.970822 sshd[6654]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:23.982417 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:36900.service: Deactivated successfully.
Mar 14 00:24:23.992151 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:24:23.994074 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:24:24.001575 systemd-logind[1559]: Removed session 20.
Mar 14 00:24:28.988806 systemd[1]: Started sshd@20-10.0.0.62:22-10.0.0.1:36910.service - OpenSSH per-connection server daemon (10.0.0.1:36910).
Mar 14 00:24:29.044061 sshd[6692]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:29.053811 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:29.069629 systemd-logind[1559]: New session 21 of user core.
Mar 14 00:24:29.077317 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:24:29.428900 sshd[6692]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:29.450501 systemd[1]: sshd@20-10.0.0.62:22-10.0.0.1:36910.service: Deactivated successfully.
Mar 14 00:24:29.469087 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:24:29.473586 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:24:29.479040 systemd-logind[1559]: Removed session 21.
Mar 14 00:24:29.696020 kubelet[2797]: E0314 00:24:29.693234 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:30.834766 systemd[1]: run-containerd-runc-k8s.io-94e256d1134b2c6cce82a3d1fe13213aee080ab881971d1bf124cf16b0f557b5-runc.GPiRUL.mount: Deactivated successfully.
Mar 14 00:24:34.467210 systemd[1]: Started sshd@21-10.0.0.62:22-10.0.0.1:56754.service - OpenSSH per-connection server daemon (10.0.0.1:56754).
Mar 14 00:24:34.530068 sshd[6758]: Accepted publickey for core from 10.0.0.1 port 56754 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:34.537130 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:34.580780 systemd-logind[1559]: New session 22 of user core.
Mar 14 00:24:34.600351 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:24:34.950170 sshd[6758]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:34.960534 systemd[1]: sshd@21-10.0.0.62:22-10.0.0.1:56754.service: Deactivated successfully.
Mar 14 00:24:34.970260 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:24:34.972623 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:24:34.977563 systemd-logind[1559]: Removed session 22.
Mar 14 00:24:39.973320 systemd[1]: Started sshd@22-10.0.0.62:22-10.0.0.1:56760.service - OpenSSH per-connection server daemon (10.0.0.1:56760).
Mar 14 00:24:40.039674 sshd[6798]: Accepted publickey for core from 10.0.0.1 port 56760 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:40.040429 sshd[6798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:40.074574 systemd-logind[1559]: New session 23 of user core.
Mar 14 00:24:40.083983 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:24:40.412509 sshd[6798]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:40.425568 systemd[1]: Started sshd@23-10.0.0.62:22-10.0.0.1:32870.service - OpenSSH per-connection server daemon (10.0.0.1:32870).
Mar 14 00:24:40.426945 systemd[1]: sshd@22-10.0.0.62:22-10.0.0.1:56760.service: Deactivated successfully.
Mar 14 00:24:40.447256 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:24:40.455201 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:24:40.462409 systemd-logind[1559]: Removed session 23.
Mar 14 00:24:40.511208 sshd[6812]: Accepted publickey for core from 10.0.0.1 port 32870 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:40.514444 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:40.524560 systemd-logind[1559]: New session 24 of user core.
Mar 14 00:24:40.532239 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:24:41.354630 sshd[6812]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:41.367366 systemd[1]: sshd@23-10.0.0.62:22-10.0.0.1:32870.service: Deactivated successfully.
Mar 14 00:24:41.373799 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:24:41.389181 systemd[1]: Started sshd@24-10.0.0.62:22-10.0.0.1:32886.service - OpenSSH per-connection server daemon (10.0.0.1:32886).
Mar 14 00:24:41.389836 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:24:41.397147 systemd-logind[1559]: Removed session 24.
Mar 14 00:24:41.503589 sshd[6867]: Accepted publickey for core from 10.0.0.1 port 32886 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:41.505642 sshd[6867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:41.515757 systemd-logind[1559]: New session 25 of user core.
Mar 14 00:24:41.536801 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:24:42.998343 systemd[1]: Started sshd@25-10.0.0.62:22-10.0.0.1:32890.service - OpenSSH per-connection server daemon (10.0.0.1:32890).
Mar 14 00:24:43.009760 sshd[6867]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:43.068916 systemd[1]: sshd@24-10.0.0.62:22-10.0.0.1:32886.service: Deactivated successfully.
Mar 14 00:24:43.082327 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:24:43.085017 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:24:43.100174 systemd-logind[1559]: Removed session 25.
Mar 14 00:24:43.280632 sshd[6896]: Accepted publickey for core from 10.0.0.1 port 32890 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:43.282434 sshd[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:43.303293 systemd-logind[1559]: New session 26 of user core.
Mar 14 00:24:43.319230 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:24:43.693156 kubelet[2797]: E0314 00:24:43.692853 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:44.129002 sshd[6896]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:44.154363 systemd[1]: Started sshd@26-10.0.0.62:22-10.0.0.1:32892.service - OpenSSH per-connection server daemon (10.0.0.1:32892).
Mar 14 00:24:44.156044 systemd[1]: sshd@25-10.0.0.62:22-10.0.0.1:32890.service: Deactivated successfully.
Mar 14 00:24:44.165923 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:24:44.167666 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:24:44.174251 systemd-logind[1559]: Removed session 26.
Mar 14 00:24:44.214905 sshd[6914]: Accepted publickey for core from 10.0.0.1 port 32892 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:44.220957 sshd[6914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:44.238850 systemd-logind[1559]: New session 27 of user core.
Mar 14 00:24:44.252400 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 14 00:24:44.568397 sshd[6914]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:44.578995 systemd[1]: sshd@26-10.0.0.62:22-10.0.0.1:32892.service: Deactivated successfully.
Mar 14 00:24:44.585853 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:24:44.586956 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:24:44.590920 systemd-logind[1559]: Removed session 27.
Mar 14 00:24:45.692424 kubelet[2797]: E0314 00:24:45.692292 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:47.696374 kubelet[2797]: E0314 00:24:47.693872 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:48.695848 kubelet[2797]: E0314 00:24:48.694893 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:49.618291 systemd[1]: Started sshd@27-10.0.0.62:22-10.0.0.1:32902.service - OpenSSH per-connection server daemon (10.0.0.1:32902).
Mar 14 00:24:49.688946 sshd[6951]: Accepted publickey for core from 10.0.0.1 port 32902 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:49.691492 sshd[6951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:49.718537 systemd-logind[1559]: New session 28 of user core.
Mar 14 00:24:49.738378 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 14 00:24:50.066193 sshd[6951]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:50.077660 systemd[1]: sshd@27-10.0.0.62:22-10.0.0.1:32902.service: Deactivated successfully.
Mar 14 00:24:50.084474 systemd[1]: session-28.scope: Deactivated successfully.
Mar 14 00:24:50.089268 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit.
Mar 14 00:24:50.093321 systemd-logind[1559]: Removed session 28.
Mar 14 00:24:55.106267 systemd[1]: Started sshd@28-10.0.0.62:22-10.0.0.1:47212.service - OpenSSH per-connection server daemon (10.0.0.1:47212).
Mar 14 00:24:55.232324 sshd[6986]: Accepted publickey for core from 10.0.0.1 port 47212 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:55.235853 sshd[6986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:55.276624 systemd-logind[1559]: New session 29 of user core.
Mar 14 00:24:55.301203 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 14 00:24:55.631864 sshd[6986]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:55.653522 systemd[1]: sshd@28-10.0.0.62:22-10.0.0.1:47212.service: Deactivated successfully.
Mar 14 00:24:55.669240 systemd[1]: session-29.scope: Deactivated successfully.
Mar 14 00:24:55.669467 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit.
Mar 14 00:24:55.680035 systemd-logind[1559]: Removed session 29.
Mar 14 00:24:57.692574 kubelet[2797]: E0314 00:24:57.692512 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:00.650343 systemd[1]: Started sshd@29-10.0.0.62:22-10.0.0.1:38158.service - OpenSSH per-connection server daemon (10.0.0.1:38158).
Mar 14 00:25:00.997236 sshd[7003]: Accepted publickey for core from 10.0.0.1 port 38158 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:01.005612 sshd[7003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:01.032954 systemd-logind[1559]: New session 30 of user core.
Mar 14 00:25:01.067489 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 14 00:25:01.650000 sshd[7003]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:01.658378 systemd[1]: sshd@29-10.0.0.62:22-10.0.0.1:38158.service: Deactivated successfully.
Mar 14 00:25:01.662014 systemd-logind[1559]: Session 30 logged out. Waiting for processes to exit.
Mar 14 00:25:01.662323 systemd[1]: session-30.scope: Deactivated successfully.
Mar 14 00:25:01.666020 systemd-logind[1559]: Removed session 30.
Mar 14 00:25:06.678152 systemd[1]: Started sshd@30-10.0.0.62:22-10.0.0.1:38164.service - OpenSSH per-connection server daemon (10.0.0.1:38164).
Mar 14 00:25:06.799923 sshd[7060]: Accepted publickey for core from 10.0.0.1 port 38164 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:06.803492 sshd[7060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:06.838999 systemd-logind[1559]: New session 31 of user core.
Mar 14 00:25:06.844558 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 14 00:25:07.261287 sshd[7060]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:07.283420 systemd[1]: sshd@30-10.0.0.62:22-10.0.0.1:38164.service: Deactivated successfully.
Mar 14 00:25:07.294760 systemd-logind[1559]: Session 31 logged out. Waiting for processes to exit.
Mar 14 00:25:07.297362 systemd[1]: session-31.scope: Deactivated successfully.
Mar 14 00:25:07.301002 systemd-logind[1559]: Removed session 31.